Automatic class registration and config typing stub generation for layered Python architectures

Conscribe

Inheritance is registration. __init__ signature is config schema.

Conscribe is a Python library that provides automatic class registration and config typing stub generation for layered architectures. It eliminates two categories of boilerplate:

  1. Manual registration — Write a class, inherit a base → it's registered. No registry["foo"] = FooClass.
  2. Config guesswork — Your __init__ parameters become the config schema. IDE autocomplete and fail-fast validation come for free.
pip install conscribe

Requires Python >= 3.9. Built on Pydantic v2.
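The "inheritance is registration" idea (point 1 above) can be sketched in plain Python with `__init_subclass__`. This is an illustration of the general pattern, not Conscribe's actual internals:

```python
# Illustrative sketch of the pattern Conscribe automates, not its internals:
# subclassing a base registers the class automatically via __init_subclass__,
# so there is no explicit registry["foo"] = FooClass call anywhere.
class PluginBase:
    registry: dict = {}

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # Derive a simple registry key from the class name.
        PluginBase.registry[cls.__name__.lower()] = cls


class FooPlugin(PluginBase):
    pass


print(sorted(PluginBase.registry))  # ['fooplugin']
```

Conscribe layers key derivation, protocol checks, and config extraction on top of this basic hook.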


Who Is This For?

Framework developers building config-driven systems with pluggable layers (agents, LLM providers, browser backends, etc.) who need registries, factories, protocol checks, and config schemas without rewriting the same boilerplate for every layer.

Framework users who write YAML configs and want IDE autocomplete, parameter docs, and fail-fast validation at startup instead of a crash deep into a run.


Quick Start

1. Define a Layer

from typing import Protocol, runtime_checkable
from conscribe import create_registrar

@runtime_checkable
class ChatModelProtocol(Protocol):
    def chat(self, messages: list[dict]) -> str: ...

LLMRegistrar = create_registrar(
    "llm",
    ChatModelProtocol,
    discriminator_field="provider",
    strip_prefixes=["Chat"],
)

2. Create a Base Class

class ChatBaseModel(metaclass=LLMRegistrar.Meta):
    __abstract__ = True

3. Write Implementations (auto-registered)

class ChatOpenAI(ChatBaseModel):
    """OpenAI LLM provider.

    Args:
        model_id: Model identifier, e.g. gpt-4o
        temperature: Sampling temperature, 0-2
    """
    def __init__(self, *, model_id: str, temperature: float = 0.0):
        self.model_id = model_id
        self.temperature = temperature

    def chat(self, messages: list[dict]) -> str: ...

# Registered as "open_ai". No decorator. No registry call.
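How does `ChatOpenAI` become `"open_ai"`? A hypothetical sketch of the key derivation (strip configured prefixes, then convert CamelCase to snake_case); Conscribe's actual rules may differ:

```python
import re

# Hypothetical key derivation, for illustration only: strip a configured
# prefix such as "Chat", then snake_case the remainder while keeping
# acronym runs like "AI" together.
def derive_key(class_name: str, strip_prefixes: list[str]) -> str:
    for prefix in strip_prefixes:
        if class_name.startswith(prefix):
            class_name = class_name[len(prefix):]
            break
    # Insert "_" at lower->upper and acronym->word boundaries.
    return re.sub(
        r"(?<=[a-z0-9])(?=[A-Z])|(?<=[A-Z])(?=[A-Z][a-z])", "_", class_name
    ).lower()


print(derive_key("ChatOpenAI", ["Chat"]))     # open_ai
print(derive_key("ChatAnthropic", ["Chat"]))  # anthropic
```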

4. Discover & Use

from conscribe import discover

discover("my_app.llm.providers")

llm_cls = LLMRegistrar.get("open_ai")   # → ChatOpenAI
llm = llm_cls(model_id="gpt-4o")
print(LLMRegistrar.keys())              # ["open_ai", "anthropic", ...]

Config Typing

Your __init__ signature is the config schema. Conscribe extracts it, builds a Pydantic discriminated union, and generates stubs for IDE autocomplete:

from conscribe import build_layer_config, generate_layer_config_source

result = build_layer_config(LLMRegistrar)
source = generate_layer_config_source(result)

See the Config Typing Guide for full details.
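The extraction step can be approximated with the standard library's `inspect` module. This is a simplified sketch only; Conscribe builds real Pydantic models with validation, while this just collects names, defaults, and required-ness:

```python
import inspect

# Simplified "signature -> schema" sketch using only the stdlib.
def sketch_schema(cls) -> dict:
    schema = {}
    for name, param in inspect.signature(cls.__init__).parameters.items():
        if name == "self":
            continue
        required = param.default is inspect.Parameter.empty
        schema[name] = {
            "annotation": param.annotation,
            "default": None if required else param.default,
            "required": required,
        }
    return schema


class ChatOpenAI:
    def __init__(self, *, model_id: str, temperature: float = 0.0):
        self.model_id = model_id
        self.temperature = temperature


schema = sketch_schema(ChatOpenAI)
print(schema["model_id"]["required"])    # True
print(schema["temperature"]["default"])  # 0.0
```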

Config Tiers

| Tier | What You Write | What Users Get |
| ---- | -------------- | -------------- |
| 1 | Plain `__init__(self, *, x: int = 5)` | Names + types + defaults |
| 1.5 | + Google/NumPy docstring with `Args:` | + descriptions |
| 2 | + `Annotated[int, Field(ge=0)]` | + constraints |
| 3 | `__config_schema__ = MyModel` | Full Pydantic model |
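Tier 2 works because `Annotated` metadata is introspectable at runtime. A stdlib-only sketch, using a stand-in `Ge` object in place of Pydantic's `Field(ge=0)`:

```python
from typing import Annotated, get_type_hints

# Stand-in for a constraint marker like pydantic's Field(ge=0); shows that
# Annotated metadata survives and can be read back for schema building.
class Ge:
    def __init__(self, bound: int):
        self.bound = bound


class Worker:
    def __init__(self, *, retries: Annotated[int, Ge(0)] = 3):
        self.retries = retries


hints = get_type_hints(Worker.__init__, include_extras=True)
constraint = hints["retries"].__metadata__[0]
print(constraint.bound)  # 0
```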

API Reference

Registration

| API | Purpose |
| --- | ------- |
| `create_registrar(name, protocol, ...)` | Create a layer registrar (recommended entry point) |
| `Registrar.get(key)` | Look up a registered class |
| `Registrar.keys()` | List all registered keys |
| `Registrar.bridge(external_cls)` | Create bridge for external class |
| `Registrar.register(key)` | Manual registration decorator |
| `discover(*package_paths)` | Import modules to trigger registration |

Config Typing

| API | Purpose |
| --- | ------- |
| `extract_config_schema(cls, mro_scope, mro_depth)` | Extract Pydantic model from `__init__` |
| `build_layer_config(registrar)` | Build discriminated union for a layer |
| `generate_layer_config_source(result)` | Generate Python stub source code |
| `generate_layer_json_schema(result)` | Generate JSON Schema |
| `compute_registry_fingerprint(registrar)` | Compute registry fingerprint hash |
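A registry fingerprint lets generated stubs be checked for staleness. A hypothetical sketch with `hashlib`, hashing the sorted key set so ordering does not matter; Conscribe's actual fingerprint inputs may differ:

```python
import hashlib

# Illustrative fingerprint: a stable hash over sorted registry keys, so a
# regenerated stub can be compared against the live registry.
def fingerprint(keys: list) -> str:
    digest = hashlib.sha256("\n".join(sorted(keys)).encode()).hexdigest()
    return digest[:12]


print(fingerprint(["open_ai", "anthropic"]))
print(fingerprint(["anthropic", "open_ai"]))  # same: order-independent
```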

Design Principles

  • Zero registration burden — Inherit a base class = registered
  • __init__ is the single source of truth — No duplicate config definitions
  • Fail-fast — Duplicate keys raise immediately; invalid config rejects at startup
  • Domain-agnostic — Pure infrastructure, knows nothing about agents or LLMs
  • Stubs and runtime are separate — Stale stubs don't affect correctness
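The fail-fast principle on duplicate keys can be sketched in a few lines; again an illustration of the behavior, not Conscribe's code:

```python
# Sketch of fail-fast duplicate detection: registering a key twice raises
# immediately instead of silently overwriting the earlier class.
class Registry:
    def __init__(self):
        self._classes: dict = {}

    def register(self, key: str, cls: type) -> None:
        if key in self._classes:
            raise ValueError(f"duplicate registry key: {key!r}")
        self._classes[key] = cls


reg = Registry()
reg.register("open_ai", object)
try:
    reg.register("open_ai", object)
except ValueError as e:
    print(e)  # duplicate registry key: 'open_ai'
```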

Documentation

Full documentation is shipped inside the package (accessible at site-packages/conscribe/) and browsable on GitHub:

| Document | Description |
| -------- | ----------- |
| `llms.txt` | AI entry point — package summary and navigation |
| `docs/overview.md` | Core concepts and architecture |
| `docs/guide-alice.md` | Tutorial: building a framework with conscribe |
| `docs/guide-bob.md` | Tutorial: consuming a conscribe-based framework |
| `docs/api-reference.md` | Full API signatures and examples |
| `docs/recipes.md` | Task-oriented "how do I X?" |
| `docs/registration.md` | Registration subsystem internals |
| `docs/config-typing.md` | Config typing pipeline internals |
| `docs/mro-and-degradation.md` | MRO chains and type degradation |
| `docs/cli.md` | CLI reference |

License

MIT
