A powerful, extensible Python logging library that enables distributed log collection across multiple Python servers with support for multiple message brokers and asynchronous database operations.
Loghive - Distributed Logger
A robust, scalable Python logging library that enables distributed log collection with advanced connection management, automatic reconnection, and thread-safe logging capabilities.
Core Components
1. LoggerClient
- Thread-safe logging client with automatic reconnection
- Exponential backoff retry mechanism
- Connection health monitoring
- Durable message delivery
2. Consumer
- Scalable message consumption
- Batch processing capabilities
- Error handling and recovery
- Multi-threaded architecture
Features
Logger Client Features
- Thread-Safe Operations:
  - Thread-safe logging with mutex locks
  - Concurrent access handling
  - Safe connection management
- Robust Connection Management:
  - Automatic reconnection with exponential backoff
  - Connection health monitoring
  - Configurable heartbeat (600 seconds)
  - Connection timeout protection (300 seconds)
  - Socket timeout (10 seconds)
- Reliable Message Delivery:
  - Durable message queues
  - Message persistence
  - Delivery confirmation
  - Automatic retry on failure
- Flexible Log Routing:
  - Service-specific routing
  - Log level-based queues
  - Dynamic queue declaration
  - Direct exchange support
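To make the routing idea above concrete, here is a minimal sketch of building a routing key from a service name and log level. The `"service.level"` key format is an assumption for illustration only, not necessarily the scheme Loghive uses internally:

```python
# Hypothetical sketch of service- and level-based routing on a direct
# exchange. The "service.level" key format is an illustrative assumption,
# not Loghive's documented internal scheme.
def routing_key(service_name: str, level: str) -> str:
    """Build a routing key so each (service, level) pair maps to its own queue."""
    return f"{service_name}.{level.lower()}"

print(routing_key("flask_service", "ERROR"))  # flask_service.error
```

With a direct exchange, each queue binds to exactly one such key, so a consumer can subscribe to only the service/level combinations it cares about.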
Consumer Features
- Advanced Message Queue Management:
  - Configurable message TTL (7 days default) - messages automatically expire after a set period to prevent queue overflow.
  - Maximum queue length limits - hard limits on queue size protect system resources and maintain performance.
  - Backpressure handling - automatically manages message flow when the system is under heavy load to prevent crashes.
- Scalable Processing:
  - Multi-threaded message processing - parallel processing of messages across multiple threads for improved throughput.
  - Batch processing support - groups messages into batches for efficient bulk processing and reduced database load.
  - Configurable worker pool - adjust the number of worker threads based on your system's capacity and requirements.
- Error Recovery:
  - Failure backoff queue - stores failed messages separately for retry with exponential backoff to prevent system overload.
  - Automatic retry mechanism - intelligently retries failed operations with configurable attempts and delays.
  - JSON validation - ensures message integrity by validating JSON structure before processing to prevent data corruption.
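The batch-processing idea above can be sketched in a few lines. This is a simplified illustration of grouping messages into fixed-size batches for bulk database writes, not Loghive's actual consumer code:

```python
# Simplified sketch of batch accumulation as a consumer might do it;
# the batch size and flush logic are illustrative, not Loghive's code.
from typing import Iterable, List

def batch_messages(messages: Iterable[dict], batch_size: int) -> List[List[dict]]:
    """Group incoming messages into fixed-size batches for bulk inserts."""
    batches, current = [], []
    for msg in messages:
        current.append(msg)
        if len(current) == batch_size:
            batches.append(current)
            current = []
    if current:  # flush the final partial batch
        batches.append(current)
    return batches

msgs = [{"id": i} for i in range(7)]
print([len(b) for b in batch_messages(msgs, 3)])  # [3, 3, 1]
```

In a real consumer the flush would also be triggered by a timeout, so a slow trickle of messages does not sit in a partial batch indefinitely.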
Installation
pip install loghive
Usage
Configuration file setup
Create a config.env file so the service can fetch connection parameters for RabbitMQ, PostgreSQL, and email monitoring:
# Basic Configurations
LOG_LEVEL=DEBUG
# Database Configurations
POSTGRES_DB_HOST=localhost
POSTGRES_DB_USER=***
POSTGRES_DB_NAME=***
POSTGRES_DB_PASSWORD=***
POSTGRES_DB_PORT=***
# RabbitMQ & Consumer Configurations
QUEUE_HOST=localhost
QUEUE_USER=***
QUEUE_PASSWORD=***
QUEUE_PORT=***
QUEUE_MAX_SIZE=10000000
# Consumer
CONSUMER_BATCH_SIZE=1000
# Monitoring
ENABLE_EMAIL_MONITORING=False
EMAIL_HOST=***
EMAIL_PORT=***
EMAIL_SENDER_EMAIL=***
EMAIL_SENDER_PASSWORD=***
Logger Client Setup
from loghive.logger.rabbitmqlogger import LoggerClient
# Initialize the logger
logger = LoggerClient(
    service_name="my-service",
    rabbitmq_url="amqp://localhost:5672/"
)
# Log messages with different levels
logger.log("INFO", "User logged in", {"user_id": "123"})
logger.log("ERROR", "Database connection failed", {"retry_count": 3})
logger.log("WARNING", "High memory usage", {"usage_percent": 85})
Consumer
from loghive.consumer.rabbitmqconsumer import start_consumer
from loghive.main.settings import internal_logger
try:
    start_consumer(["flask_service"])  # replace with your service names
except Exception as e:
    internal_logger.error(f"Error faced while starting consumer: {e}")
The internal_logger can be imported from loghive.main.settings; it behaves like a normal logger and does not publish messages to RabbitMQ.
Message Structure
{
    "service": "service_name",
    "level": "INFO",
    "message": "Log message",
    "information": {
        "context_key": "additional context as a dictionary"
    },
    "timestamp": "2024-12-27 10:30:45"
}
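Assembling this payload is straightforward. The sketch below follows the documented field names; the helper function itself and the timestamp formatting are illustrative assumptions, not Loghive's internal API:

```python
# Sketch of building the documented message payload. The field names
# match the structure above; build_log_message is a hypothetical helper,
# not part of Loghive's public API.
import json
from datetime import datetime

def build_log_message(service: str, level: str, message: str, information: dict) -> str:
    """Serialize a log event into the documented JSON message structure."""
    payload = {
        "service": service,
        "level": level,
        "message": message,
        "information": information,
        "timestamp": datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
    }
    return json.dumps(payload)

raw = build_log_message("my-service", "INFO", "User logged in", {"user_id": "123"})
print(json.loads(raw)["level"])  # INFO
```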
Connection Configuration
connection_params = {
    "heartbeat": 600,  # Heartbeat interval in seconds
    "blocked_connection_timeout": 300,  # Connection timeout in seconds
    "socket_timeout": 10,  # Socket timeout in seconds
}
Queue Settings
QUEUE_ARGUMENTS = {
    "x-message-ttl": 604800000,  # 7 days in milliseconds
    "x-max-length": 1000000,  # Maximum queue size
}
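The TTL value is easy to sanity-check: it is simply 7 days expressed in milliseconds, which is the unit RabbitMQ expects for x-message-ttl:

```python
# Verifying the x-message-ttl value above: 7 days in milliseconds.
DAYS = 7
ttl_ms = DAYS * 24 * 60 * 60 * 1000
print(ttl_ms)  # 604800000
```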
Architecture
Logger Client Architecture
+----------------+ +------------------+ +----------------+
| Application | | LoggerClient | | RabbitMQ |
| Code | --> | - Thread Safety | --> | Exchange |
| | | - Auto Reconnect | | (Direct) |
+----------------+ | - Retry Logic | +----------------+
+------------------+
Message Flow
1. Application generates log
   ↓
2. LoggerClient validates and formats message
   ↓
3. Thread-safe connection check
   ↓
4. Publish with retry mechanism
   ↓
5. RabbitMQ confirms delivery
   ↓
6. Consumer processes message
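Step 4, publishing with retries, can be simulated without a broker. The retry budget of 3 attempts matches the documented behavior; the transport class below is a stand-in for the real RabbitMQ channel:

```python
# Pure-Python simulation of the publish-with-retry step in the flow above.
# The 3-attempt policy matches the documented behavior; FlakyTransport is
# a stub standing in for a real RabbitMQ channel.
class FlakyTransport:
    """Stub transport that fails a fixed number of times before succeeding."""
    def __init__(self, failures: int):
        self.failures = failures

    def publish(self, message: str) -> None:
        if self.failures > 0:
            self.failures -= 1
            raise ConnectionError("broker unavailable")

def publish_with_retry(transport, message: str, max_attempts: int = 3) -> int:
    """Return the attempt number that succeeded; re-raise after max_attempts."""
    for attempt in range(1, max_attempts + 1):
        try:
            transport.publish(message)
            return attempt
        except ConnectionError:
            if attempt == max_attempts:
                raise
    raise RuntimeError("unreachable")

print(publish_with_retry(FlakyTransport(failures=2), "log line"))  # 3
```

In the real client, a backoff delay would separate the attempts instead of retrying immediately.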
Error Handling
Logger Client Error Recovery
- Connection failures trigger automatic reconnection
- Exponential backoff between retry attempts (1-30 seconds)
- Maximum of 3 retry attempts per operation
- Separate monitoring thread for connection health
- Thread-safe operation handling
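An exponential backoff schedule consistent with the documented 1-30 second range looks like the following. The doubling base is an assumption chosen to fit that range, not a value confirmed by the source:

```python
# Illustrative exponential backoff schedule capped at 30 seconds.
# The doubling base is an assumption consistent with the documented
# 1-30 s range, not Loghive's confirmed parameters.
def backoff_delays(attempts: int, cap: float = 30.0) -> list:
    """Delay before each retry: 1 s, 2 s, 4 s, ... capped at `cap` seconds."""
    return [min(2 ** n, cap) for n in range(attempts)]

print(backoff_delays(6))  # [1, 2, 4, 8, 16, 30]
```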
Message Delivery Guarantees
- Durable queues and exchanges
- Persistent messages (delivery_mode=2)
- Message acknowledgment
- Automatic queue declaration
- Connection recovery
Best Practices
- Initialization:
logger = LoggerClient(
    service_name="unique-service-name",
    rabbitmq_url="amqp://username:password@host:port/vhost"
)
- Graceful Shutdown:
# Always close the logger when done
logger.close()
- Error Handling:
try:
    # Your application code
    logger.log("INFO", "Operation successful")
except Exception as e:
    logger.log("ERROR", "Operation failed", {"error": str(e)})
- Structured Logging:
logger.log(
    "INFO",
    "User action completed",
    {
        "user_id": "123",
        "action": "checkout",
        "duration_ms": 150
    }
)
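One way to guarantee the graceful-shutdown practice is to wrap the client in a context manager so close() runs even when the body raises. This is a sketch; StubClient stands in for LoggerClient here, since the source does not say whether LoggerClient supports the with statement natively:

```python
# Sketch of enforcing graceful shutdown via a context manager.
# StubClient is a stand-in for LoggerClient; whether LoggerClient
# itself supports `with` is not confirmed by the source.
from contextlib import contextmanager

class StubClient:
    def __init__(self):
        self.closed = False

    def log(self, level, message, info=None):
        pass

    def close(self):
        self.closed = True

@contextmanager
def managed_logger(client):
    try:
        yield client
    finally:
        client.close()  # runs even if the body raises

client = StubClient()
with managed_logger(client) as lg:
    lg.log("INFO", "Operation successful")
print(client.closed)  # True
```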
Monitoring
The logger provides built-in monitoring for:
- Connection status
- Message delivery success/failure
- Retry attempts
- Queue health
- Thread status
Performance Considerations
- Thread-safe operations may impact throughput
- Connection monitoring adds minimal overhead
- Retry mechanisms prevent message loss
- Heartbeat monitoring ensures connection health
- Socket timeouts prevent hanging operations
Contributing
See our Contributing Guide for details on how to contribute to this project.
License
This project is licensed under the MIT License - see the LICENSE file for details.
Support
For issues and help:
- Check the documentation
- Review existing issues
- Create a new issue with detailed reproduction steps