
grelmicro

Async-first toolkit. Microservice patterns inside.

A Python toolkit for distributed systems: microservices, modular monoliths, and self-contained systems.


Project status: Active development. grelmicro is pre-1.0. The public API is not yet stable. Breaking changes are allowed on MINOR bumps (e.g. 0.14.0 → 0.15.0) and never on PATCH. Pin the minor: grelmicro>=0.14.0,<0.15.0. After 1.0.0, standard semver applies. See the versioning policy.


Documentation: https://grelinfo.github.io/grelmicro/

Source Code: https://github.com/grelinfo/grelmicro


Why grelmicro

Stop reinventing the wheel. grelmicro ships microservice patterns as small, composable modules with pluggable backends: locks, rate limits, circuit breakers, cache, logging, health checks, and task scheduling. Async-first, type-safe, and battle-tested in production.

It is built for any Python application that coordinates work across processes, workers, or replicas. The same primitives serve every distributed system, whether you call it microservices, a modular monolith, or a self-contained system. A distributed lock is a distributed lock whether your system is one process or fifty. It fits naturally into cloud-native applications, containerized apps, and Kubernetes deployments.

  • Micro: one focused primitive per module, each a canonical microservice pattern (distributed lock, leader election, rate limiter, circuit breaker, health check API, externalised configuration).
  • Fast: small footprint by design. We keep the layers thin so your code stays quick.
  • Async-first: every I/O call is async / await. Drops into FastAPI, FastStream, and any asyncio-based stack.
  • Backend-agnostic: each primitive is a protocol. Swap Redis for PostgreSQL or SQLite without touching application code.
  • Railguarded: 100% pytest coverage, ty-checked, ruff-linted, Pydantic-validated. Pre-1.0 API may shift on minor bumps. 1.x commits to standard semver.
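To make the backend-agnostic point concrete, here is a minimal sketch of the idea in plain Python (illustrative only; LockBackend, MemoryLockBackend, and this Lock are simplified stand-ins, not grelmicro's actual interfaces): the primitive is written against a Protocol, so swapping the backend never touches call sites.

```python
import asyncio
from typing import Protocol


class LockBackend(Protocol):
    """Minimal contract a lock primitive could target (hypothetical)."""

    async def acquire(self, name: str) -> bool: ...
    async def release(self, name: str) -> None: ...


class MemoryLockBackend:
    """In-process backend; a Redis or SQL backend would satisfy the same protocol."""

    def __init__(self) -> None:
        self._held: set[str] = set()

    async def acquire(self, name: str) -> bool:
        if name in self._held:
            return False
        self._held.add(name)
        return True

    async def release(self, name: str) -> None:
        self._held.discard(name)


class Lock:
    """The primitive depends only on the protocol, never on a concrete backend."""

    def __init__(self, name: str, backend: LockBackend) -> None:
        self.name, self.backend = name, backend

    async def __aenter__(self) -> "Lock":
        # Spin until the backend grants the lock.
        while not await self.backend.acquire(self.name):
            await asyncio.sleep(0.01)
        return self

    async def __aexit__(self, *exc: object) -> None:
        await self.backend.release(self.name)
```

Replacing MemoryLockBackend with a networked backend changes one constructor argument and nothing else.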

Modules

  • Cache: @cached decorator with per-key stampede protection; in-memory TTLCache or RedisCacheBackend.
  • Synchronization: distributed Lock, TaskLock, LeaderElection; backends for Redis, PostgreSQL, SQLite, Kubernetes, or in-memory.
  • Task Scheduler: periodic task execution with optional distributed locking; lightweight, not a Celery replacement.
  • Resilience: Circuit Breaker and Rate Limiter with pluggable algorithms (TokenBucketConfig, GCRAConfig).
  • Logging: 12-factor logging with JSON, LOGFMT, TEXT, or PRETTY output, structured error rendering, and OpenTelemetry trace context.
  • Tracing: unified instrumentation; @instrument creates OpenTelemetry spans and enriches log records with structured context.
  • Health: health check registry with concurrent runners and FastAPI liveness/readiness integration.
  • JSON: fast JSON via orjson when available, with automatic fallback to stdlib json.

Installation

pip install grelmicro

See the Installation guide for uv and poetry commands, plus optional extras for Redis, PostgreSQL, SQLite, Kubernetes, OpenTelemetry, and structlog.

Example

FastAPI integration

Create a file main.py with:

import logging
from contextlib import asynccontextmanager

from fastapi import FastAPI, HTTPException, Request

import grelmicro
from grelmicro import cache, resilience, sync
from grelmicro.cache import JsonSerializer, TTLCache, cached
from grelmicro.cache.redis import RedisCacheBackend
from grelmicro.logging import configure_logging
from grelmicro.resilience import (
    CircuitBreaker,
    RateLimitExceededError,
    RateLimiter,
)
from grelmicro.resilience.redis import RedisRateLimiterBackend
from grelmicro.sync import LeaderElection, Lock
from grelmicro.sync.redis import RedisSyncBackend
from grelmicro.task import TaskManager

logger = logging.getLogger(__name__)

# === grelmicro ===
task = TaskManager()
sync.register(RedisSyncBackend("redis://localhost:6379/0"))
cache.register(RedisCacheBackend("redis://localhost:6379/0", prefix="myapp:"))
resilience.register(RedisRateLimiterBackend("redis://localhost:6379/0"))

leader_election = LeaderElection("leader-election")
task.add_task(leader_election)

ttl_cache = TTLCache(ttl=300, serializer=JsonSerializer())


# === FastAPI ===
@asynccontextmanager
async def lifespan(app):
    configure_logging()
    async with grelmicro.lifespan(task):
        yield


app = FastAPI(lifespan=lifespan)


# --- Cache: avoid redundant database queries ---
@cached(ttl_cache)
async def get_user(user_id: int) -> dict:
    return {"id": user_id, "name": "Alice"}


@app.get("/users/{user_id}")
async def read_user(user_id: int):
    return await get_user(user_id)


# --- Circuit Breaker: protect calls to an unreliable service ---
cb = CircuitBreaker("my-service")


@app.get("/")
async def read_root():
    async with cb:
        return {"Hello": "World"}


# --- Rate Limiter: protect endpoints from overload ---
api_limiter = RateLimiter.gcra("api", limit=100, window=60)


@app.get("/api")
async def api_endpoint(request: Request):
    try:
        await api_limiter.acquire_or_raise(key=request.client.host)
    except RateLimitExceededError as exc:
        raise HTTPException(
            status_code=429,
            detail="Too many requests",
            headers={"Retry-After": str(int(exc.retry_after))},
        ) from exc
    return {"status": "ok"}


# --- Distributed Lock: synchronize access to a shared resource ---
lock = Lock("shared-resource")


@app.get("/protected")
async def protected():
    async with lock:
        return {"status": "ok"}


# --- Interval Task: run locally on every worker ---
@task.interval(seconds=5)
def heartbeat():
    logger.info("heartbeat")


# --- Distributed Task: run once per interval across all workers ---
@task.interval(seconds=60, max_lock_seconds=300)
def cleanup():
    logger.info("cleanup")


# --- Leader-gated Task: only the leader executes ---
@task.interval(seconds=10, leader=leader_election)
def leader_only_task():
    logger.info("leader task")

License

This project is licensed under the terms of the MIT license.
