Monitored IO Loop
A production ready monitored IO loop for Python.
No more wondering why your event loop (or random pieces of your code) are suddenly popping up as slow in your monitoring.
Getting started
Installation
```shell
pip install monitored_ioloop          # For the default event loop
pip install monitored_ioloop[uvloop]  # For additional support of the uvloop event loop
```
Demo
✏️ Play with the demo in a sandbox.
Usage
Asyncio event loop
```python
from monitored_ioloop.monitored_asyncio import MonitoredAsyncIOEventLoopPolicy
from monitored_ioloop.monitoring import IoLoopMonitorState
import asyncio
import time


def monitor_callback(ioloop_state: IoLoopMonitorState) -> None:
    print(ioloop_state)


async def test_coroutine() -> None:
    time.sleep(2)  # deliberately blocking -- will show up as a slow callback


def main() -> None:
    asyncio.set_event_loop_policy(MonitoredAsyncIOEventLoopPolicy(monitor_callback))
    asyncio.run(test_coroutine())


if __name__ == "__main__":
    main()
```
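To build intuition for what such a policy does under the hood, here is a minimal stdlib-only sketch of the core idea: wrap a callback with a timer and report its wall time to a user-supplied monitor. The names `wrap_callback` and `monitor` are illustrative, not the library's actual API.

```python
import time
from typing import Any, Callable

# Hypothetical sketch: time a callback and report its wall-clock duration
# to a monitor function. This mirrors the concept, not the library's internals.


def wrap_callback(
    callback: Callable[..., Any], monitor: Callable[[float], None]
) -> Callable[..., Any]:
    def wrapped(*args: Any, **kwargs: Any) -> Any:
        start = time.perf_counter()
        try:
            return callback(*args, **kwargs)
        finally:
            # Report wall time even if the callback raises.
            monitor(time.perf_counter() - start)

    return wrapped


timings: list[float] = []
slow = wrap_callback(lambda: time.sleep(0.05), timings.append)
slow()
print(f"callback took {timings[0]:.3f}s")
```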
Uvloop event loop
To use the uvloop event loop, make sure to install `monitored_ioloop[uvloop]`.
Usage is the same as with the asyncio event loop, but with `monitored_ioloop.monitored_uvloop.MonitoredUvloopEventLoopPolicy` instead of `monitored_ioloop.monitored_asyncio.MonitoredAsyncIOEventLoopPolicy`.
The monitor callback
The monitor callback will be called for every execution that the event loop initiates.
With every call you will receive an IoLoopMonitorState object that contains the following information:

- `callback_wall_time`: Wall-clock execution time of the callback.
- `loop_handles_count`: The number of handles (think of them as tasks) that the IO loop is currently handling.
- `loop_lag`: The time from the moment a task was added to the loop until it was executed.
- `callback_pretty_name`: The pretty name of the callback that was executed.

Please note: the name is best effort and may still be of little help, depending on the specific callback implementation.
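The loop-lag concept is easy to demonstrate with plain asyncio, no extra library needed. The sketch below (an illustration, not the library's implementation) schedules a callback, blocks the loop for 100 ms, and measures how long the callback sat in the queue before running:

```python
import asyncio
import time

# Illustrative measurement of "loop lag": the delay between scheduling a
# callback with call_soon and the moment the loop actually runs it.


def measure_lag() -> float:
    lag = 0.0

    async def main() -> None:
        nonlocal lag
        loop = asyncio.get_running_loop()
        scheduled_at = loop.time()

        def callback() -> None:
            nonlocal lag
            # Time the callback spent waiting in the ready queue.
            lag = loop.time() - scheduled_at

        loop.call_soon(callback)
        time.sleep(0.1)  # a blocking call stalls the loop, inflating the lag
        await asyncio.sleep(0)  # yield so the loop can run the callback

    asyncio.run(main())
    return lag


print(f"loop lag: {measure_lag():.3f}s")
```

Replace the blocking `time.sleep(0.1)` with `await asyncio.sleep(0.1)` and the measured lag drops to near zero, since the loop stays free to run the callback.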
Performance impact
Since many of you might be concerned about the performance impact of this library, I have run benchmarks to measure it.
In summary, the performance impact is negligible for most use cases.
You can find more detailed information in the following README.md.
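If you want to gauge the overhead on your own machine, a rough stdlib-only microbenchmark like the one below works: run it once with the stock event loop policy and once with the monitored policy installed, and compare the timings. The function name `drain_noops` and the callback count are arbitrary choices for illustration.

```python
import asyncio
import time

# Rough microbenchmark: schedule many no-op callbacks and time how long
# the event loop takes to drain them all.


def drain_noops(n: int = 10_000) -> float:
    async def main() -> float:
        loop = asyncio.get_running_loop()
        done = asyncio.Event()
        remaining = n

        def noop() -> None:
            nonlocal remaining
            remaining -= 1
            if remaining == 0:
                done.set()

        start = time.perf_counter()
        for _ in range(n):
            loop.call_soon(noop)
        await done.wait()
        return time.perf_counter() - start

    return asyncio.run(main())


print(f"drained 10,000 callbacks in {drain_noops():.4f}s")
```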
Usage examples
You can find example projects showing potential use cases in the examples folder.
Currently there is only the FastAPI with Prometheus exporter example, but more will be added in the future.
Roadmap
- Add support for the amount of `Handle`'s on the event loop
- Add an examples folder
- Add loop lag metric (inspired by Node.js loop monitoring)
- Add visibility into which `Handle`s are making the event loop slower
- Add easier integration with `uvicorn`
- Add easier integration with popular monitoring tools like Prometheus
Credits
- I took a lot of inspiration from the uvloop project for everything regarding the user interface of swapping the IO loop.
- The great PyCon talk: https://www.youtube.com/watch?v=GSiZkP7cI80&t=16s
Hashes for monitored_ioloop-0.0.10-py3-none-any.whl

| Algorithm | Hash digest |
|---|---|
| SHA256 | 4fa4e990576ed04d554618ab47d36105be1e930b9955177470fb8bfd69886694 |
| MD5 | 85c584c419a39dc40f97de81fbf3dc09 |
| BLAKE2b-256 | 1d8606f1490a8cceef258c99316a41c56af30d18bb85e728e75e347ddd8196a3 |