TurboMQ
=======
**TurboMQ** is a simple message queue system. I hope it is fast enough
to merit the name. In our tests it could deliver and consume millions of
messages per second, but we leave the final judgment to the developers
who use the library. Keep in mind that it is currently experimental, and
there will be dramatic changes in both functionality and protocols.
Why was TurboMQ developed?
==========================
First, I want to explain why a new message queue system was developed.
There are many message queue systems available, and some of them, like
**RabbitMQ** or **ZMQ**, are popular and stable. The most important
reason behind this implementation is that most message queue systems are
designed for backend processing, such as distributing jobs between nodes
to process huge amounts of data or to complete the remaining part of a
business transaction. Certainly, TurboMQ can be used to distribute work
between nodes. However, it was originally designed to support millions
of providers and consumers working with millions of queues and topics.
The closest system in terms of queue functionality is **Redis**. It has
a remarkable I/O mechanism for handling network connections. However, a
single instance can only utilize one core. Do we really want to use just
one core out of, say, 8 available cores? Or do we want to configure
clustering inside one machine just to use all of them?
**ZMQ** is a good library. It is fast, stable and useful for many
purposes. Nonetheless, there is a serious problem with topic-based
PUB-SUB queues: consumers (subscribers) have to be connected before
providers `(missing message problem solver)`_, otherwise messages are
lost.
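To make the problem concrete, here is a minimal **pyzmq** sketch (not
TurboMQ code) reproducing that behaviour: the publisher sends before the
subscriber has connected, so the message is silently dropped.

.. code-block:: python

    import time
    import zmq

    ctx = zmq.Context()

    # The publisher binds and sends immediately.
    pub = ctx.socket(zmq.PUB)
    pub.bind('tcp://127.0.0.1:5557')
    pub.send_multipart([b'hello', b'this message will be lost'])

    # The subscriber joins after the send, so it never sees that message.
    sub = ctx.socket(zmq.SUB)
    sub.connect('tcp://127.0.0.1:5557')
    sub.setsockopt(zmq.SUBSCRIBE, b'hello')

    time.sleep(0.5)
    try:
        print(sub.recv_multipart(flags=zmq.NOBLOCK))
    except zmq.Again:
        print('message was lost')  # this branch is taken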
Technical information
=====================
**TurboMQ** is a Python module. To avoid GIL problems, it is developed
in pure **C** and **Cython**. It uses its own event loop system. The
benefit is a truly multi-threaded event loop that can exploit all
available cores. The drawback is that it does not support Windows. Is
that the end of the bad news? No: kqueue has not been implemented yet,
so the (slow) POSIX poll is used on BSD families. Is there any more good
news? Yes: Windows and kqueue support are going to be implemented very
soon.
Installation
============
Installation is easy. The package can be installed with pip::

    $ sudo pip install turbomq

Alternatively, download or clone the source and run the usual Python
magic::

    $ sudo python setup.py install
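As a quick sanity check, the module and its two public classes should
import without errors once the extension is built and installed.

.. code-block:: python

    # If installation succeeded, this import raises no exception.
    from turbomq import TurboClient, TurboEngine
    print(TurboEngine, TurboClient)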
Usage
=====
To use **TurboMQ**, just import it and run the server. The following
code runs a server for 10 minutes.

.. code-block:: python

    from turbomq import TurboEngine
    import time

    # You can pass the thread count as a second parameter.
    # Otherwise, it automatically selects 4 threads per core.
    e = TurboEngine('tcp://127.0.0.1:33444')
    e.run()

    # The "run" method does not block the main thread,
    # so you can simply wait or run your own loop as you wish.
    time.sleep(10.0 * 60)

    # The "stop" method just shuts the TCP sockets down.
    e.stop()

    # After "destroy" all resources are freed,
    # and you cannot use this instance anymore.
    e.destroy()
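As the comments note, you can pass an explicit thread count and run your
own loop instead of a fixed sleep. A minimal sketch, assuming the thread
count is a plain positional integer (8 is only an illustrative value):

.. code-block:: python

    from turbomq import TurboEngine
    import time

    # Assumption: the second constructor argument is the thread count,
    # as suggested by the comment above (8 is illustrative).
    e = TurboEngine('tcp://127.0.0.1:33444', 8)
    e.run()

    try:
        # Run our own loop instead of a fixed sleep.
        while True:
            time.sleep(1.0)
    except KeyboardInterrupt:
        pass
    finally:
        e.stop()
        e.destroy()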
This code sends a message to the server and receives it back.

.. code-block:: python

    from turbomq import TurboClient

    # Connect to the server.
    c = TurboClient('tcp://127.0.0.1:33444')

    # Create a mirror queue on the client side.
    q = c.get_queue('test')

    # Both the topic key and the data are mandatory in push.
    q.push('hello', 'turbo')

    # In pop you need to specify a timeout. This call waits up to two
    # seconds; if the timeout is exceeded, it returns None.
    print(q.pop('hello', 2))
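Because pop returns None on timeout, a consumer can use that as its stop
or retry signal. A minimal polling sketch reusing the same queue and
topic as above:

.. code-block:: python

    from turbomq import TurboClient

    c = TurboClient('tcp://127.0.0.1:33444')
    q = c.get_queue('test')

    # Poll the 'hello' topic; each pop waits up to two seconds and
    # returns None when no message arrived within the timeout.
    while True:
        data = q.pop('hello', 2)
        if data is None:
            break  # nothing left to consume
        print(data)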