
Project description

grelmicro

Async-first toolkit. Microservice patterns inside.

A Python toolkit for distributed systems: microservices, modular monoliths, and self-contained systems.


Project status: Active development. grelmicro is pre-1.0. The public API is not yet stable. Breaking changes are allowed on MINOR bumps (0.14.0 → 0.15.0) and never on PATCH. Pin the minor: grelmicro>=0.14.0,<0.15.0. After 1.0.0, standard semver applies. See the versioning policy.


Documentation: https://grelinfo.github.io/grelmicro/

Source Code: https://github.com/grelinfo/grelmicro


Why grelmicro

Stop reinventing the wheel. grelmicro ships microservice patterns as small, composable modules with pluggable backends: locks, rate limits, circuit breakers, cache, logging, health checks, and task scheduling. Async-first, type-safe, and battle-tested in production.

It is built for any Python application that coordinates work across processes, workers, or replicas. The same primitives serve every distributed system, whether you call it microservices, a modular monolith, or a self-contained system. A distributed lock is a distributed lock whether your system is one process or fifty. It fits naturally into cloud-native applications, containerized apps, and Kubernetes deployments.

  • Micro: one focused primitive per module, each a canonical microservice pattern (distributed lock, leader election, rate limiter, circuit breaker, health check API, externalised configuration).
  • Fast: small footprint by design. We keep the layers thin so your code stays quick.
  • Async-first: every I/O call is async / await. Drops into FastAPI, FastStream, and any AnyIO-based stack.
  • Backend-agnostic: each primitive is a protocol. Swap Redis for PostgreSQL or SQLite without touching application code.
  • Railguarded: 100% pytest coverage, ty-checked, ruff-linted, Pydantic-validated. Pre-1.0 API may shift on minor bumps. 1.x commits to standard semver.
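The backend-agnostic point can be sketched in a few lines. The names below (LockBackend, MemoryLockBackend) are illustrative, not grelmicro's actual classes: the idea is that a primitive is typed against a protocol, so a Redis, PostgreSQL, or in-memory backend plugs in without touching application code.

```python
import asyncio
from typing import Protocol


class LockBackend(Protocol):
    """Minimal lock-backend protocol (illustrative, not grelmicro's API)."""

    async def acquire(self, name: str) -> bool: ...
    async def release(self, name: str) -> None: ...


class MemoryLockBackend:
    """In-memory backend; a Redis or PostgreSQL backend would satisfy the same protocol."""

    def __init__(self) -> None:
        self._held: set[str] = set()

    async def acquire(self, name: str) -> bool:
        if name in self._held:
            return False
        self._held.add(name)
        return True

    async def release(self, name: str) -> None:
        self._held.discard(name)


async def main() -> None:
    # Application code only sees the protocol, never the concrete backend.
    backend: LockBackend = MemoryLockBackend()
    assert await backend.acquire("shared-resource")
    assert not await backend.acquire("shared-resource")  # already held
    await backend.release("shared-resource")


asyncio.run(main())
```

Swapping backends then means changing one registration line, not the call sites.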

Modules

  • Cache: @cached decorator with per-key stampede protection. In-memory TTLCache or RedisCacheBackend.
  • Synchronization: distributed Lock, TaskLock, and LeaderElection. Redis, PostgreSQL, SQLite, Kubernetes, or in-memory backends.
  • Task Scheduler: periodic task execution with optional distributed locking. Lightweight, not a Celery replacement.
  • Resilience: Circuit Breaker and Rate Limiter with pluggable algorithms (TokenBucketConfig, GCRAConfig).
  • Logging: 12-factor logging with JSON, LOGFMT, TEXT, or PRETTY output, structured error rendering, and OpenTelemetry trace context.
  • Tracing: unified instrumentation. @instrument creates OpenTelemetry spans and enriches log records with structured context.
  • Health: health check registry with concurrent runners and FastAPI liveness / readiness integration.
  • JSON: fast JSON via orjson when available, with automatic fallback to stdlib json.
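The JSON module's orjson-with-fallback strategy is a common pattern. A minimal sketch of the idea (independent of grelmicro's internals; json_dumps is a hypothetical helper name):

```python
import json
from typing import Any

try:
    import orjson

    def json_dumps(obj: Any) -> bytes:
        # orjson serializes straight to compact bytes and is typically much faster
        return orjson.dumps(obj)

except ImportError:  # orjson not installed: fall back to the stdlib

    def json_dumps(obj: Any) -> bytes:
        # match orjson's output shape: compact separators, bytes result
        return json.dumps(obj, separators=(",", ":")).encode()


print(json_dumps({"status": "ok"}))  # b'{"status":"ok"}' either way
```

Callers get one stable signature, and installing orjson is a pure performance upgrade.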

Installation

pip install grelmicro

See the Installation guide for uv and poetry commands, plus optional extras for Redis, PostgreSQL, SQLite, Kubernetes, OpenTelemetry, and structlog.

Example

FastAPI integration

Create a file main.py with:

import logging
from contextlib import asynccontextmanager

from fastapi import FastAPI, HTTPException, Request

import grelmicro
from grelmicro import cache, resilience, sync
from grelmicro.cache import JsonSerializer, TTLCache, cached
from grelmicro.cache.redis import RedisCacheBackend
from grelmicro.logging import configure_logging
from grelmicro.resilience import (
    CircuitBreaker,
    RateLimitExceededError,
    RateLimiter,
)
from grelmicro.resilience.redis import RedisRateLimiterBackend
from grelmicro.sync import LeaderElection, Lock
from grelmicro.sync.redis import RedisSyncBackend
from grelmicro.task import TaskManager

logger = logging.getLogger(__name__)

# === grelmicro ===
task = TaskManager()
sync.register(RedisSyncBackend("redis://localhost:6379/0"))
cache.register(RedisCacheBackend("redis://localhost:6379/0", prefix="myapp:"))
resilience.register(RedisRateLimiterBackend("redis://localhost:6379/0"))

leader_election = LeaderElection("leader-election")
task.add_task(leader_election)

ttl_cache = TTLCache(ttl=300, serializer=JsonSerializer())


# === FastAPI ===
@asynccontextmanager
async def lifespan(app):
    configure_logging()
    async with grelmicro.lifespan(task):
        yield


app = FastAPI(lifespan=lifespan)


# --- Cache: avoid redundant database queries ---
@cached(ttl_cache)
async def get_user(user_id: int) -> dict:
    return {"id": user_id, "name": "Alice"}


@app.get("/users/{user_id}")
async def read_user(user_id: int):
    return await get_user(user_id)


# --- Circuit Breaker: protect calls to an unreliable service ---
cb = CircuitBreaker("my-service")


@app.get("/")
async def read_root():
    async with cb:
        return {"Hello": "World"}


# --- Rate Limiter: protect endpoints from overload ---
api_limiter = RateLimiter.gcra("api", limit=100, window=60)


@app.get("/api")
async def api_endpoint(request: Request):
    try:
        await api_limiter.acquire_or_raise(key=request.client.host)
    except RateLimitExceededError as exc:
        raise HTTPException(
            status_code=429,
            detail="Too many requests",
            headers={"Retry-After": str(int(exc.retry_after))},
        ) from exc
    return {"status": "ok"}


# --- Distributed Lock: synchronize access to a shared resource ---
lock = Lock("shared-resource")


@app.get("/protected")
async def protected():
    async with lock:
        return {"status": "ok"}


# --- Interval Task: run locally on every worker ---
@task.interval(seconds=5)
def heartbeat():
    logger.info("heartbeat")


# --- Distributed Task: run once per interval across all workers ---
@task.interval(seconds=60, max_lock_seconds=300)
def cleanup():
    logger.info("cleanup")


# --- Leader-gated Task: only the leader executes ---
@task.interval(seconds=10, leader=leader_election)
def leader_only_task():
    logger.info("leader task")

License

This project is licensed under the terms of the MIT license.

Download files

Download the file for your platform.

Source Distribution

grelmicro-0.20.0.tar.gz (495.2 kB view details)

Uploaded Source

Built Distribution


grelmicro-0.20.0-py3-none-any.whl (121.5 kB view details)

Uploaded Python 3

File details

Details for the file grelmicro-0.20.0.tar.gz.

File metadata

  • Download URL: grelmicro-0.20.0.tar.gz
  • Upload date:
  • Size: 495.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.13

File hashes

Hashes for grelmicro-0.20.0.tar.gz
Algorithm Hash digest
SHA256 4a79272e9edf9072d1784b84bdb854160b0fb0148bc084bd526b0c5ebbad1c31
MD5 659352ff4b1a83f049381e66370958f5
BLAKE2b-256 9d532e6463e4a3230c91b5aa45a1cb3ded25bce7c176a3f7e1be54d51f97ddc0


Provenance

The following attestation bundles were made for grelmicro-0.20.0.tar.gz:

Publisher: release.yml on grelinfo/grelmicro

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file grelmicro-0.20.0-py3-none-any.whl.

File metadata

  • Download URL: grelmicro-0.20.0-py3-none-any.whl
  • Upload date:
  • Size: 121.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.13

File hashes

Hashes for grelmicro-0.20.0-py3-none-any.whl
Algorithm Hash digest
SHA256 dc8c10b366e56b10c27f0a7e52e29be31b16e863e88fa1ef989138d17bdb6916
MD5 ccb413f2385cbe02bd1da9aa3a1dd893
BLAKE2b-256 90b95e8731773f4eadbf9dc47e891ea7ef50652cf1eb1118f02220dfced4e6c1


Provenance

The following attestation bundles were made for grelmicro-0.20.0-py3-none-any.whl:

Publisher: release.yml on grelinfo/grelmicro

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
