Declarative Document Indexing (DDI) Schemas for RAG — LLM-powered pre-indexing and hybrid retrieval.

Ennoia


Ennoia introduces Declarative Document Indexing Schemas (DDI Schemas) for RAG — a new pre-indexing approach where LLM-powered extraction is defined through schemas and executed before documents enter any store, replacing naive chunk-and-embed with structured, queryable indices.

Traditional RAG is like feeding your documents through a shredder and then trying to answer questions by pulling out strips of paper one by one.

Ennoia is like reading each document first, taking structured notes on what matters, and then searching your notes — while keeping the originals on the shelf.

Install

pip install "ennoia[ollama,sentence-transformers,cli]"

Available extras:

  • ollama
  • openai
  • anthropic
  • sentence-transformers
  • filesystem (Parquet + NumPy stores)
  • cli (ennoia CLI)
  • qdrant (Qdrant vector + hybrid stores)
  • pgvector (PostgreSQL + pgvector hybrid store)
  • server (FastAPI REST + FastMCP)
  • docs (mkdocs-material site)
  • all (everything above)

Quick start (SDK)

from datetime import date
from typing import Literal

from ennoia import BaseSemantic, BaseStructure, Pipeline, Store
from ennoia.adapters.embedding.sentence_transformers import SentenceTransformerEmbedding
from ennoia.adapters.llm.ollama import OllamaAdapter
from ennoia.store import InMemoryStructuredStore, InMemoryVectorStore


# DDI Schema #1 — structured extraction. Field types drive filter
# operators automatically (Literal → eq/in, date → range ops); the
# docstring is the LLM prompt.
class DocMeta(BaseStructure):
    """Extract basic document metadata."""

    category: Literal["legal", "medical", "financial"]
    doc_date: date


# DDI Schema #2 — semantic extraction. The docstring is the question the
# LLM answers; the answer is embedded for vector search.
class Summary(BaseSemantic):
    """What is the main topic of this document?"""


# Configure the pipeline: schemas + a two-phase store (structured filter
# → vector search) + LLM and embedding adapters.
pipeline = Pipeline(
    schemas=[DocMeta, Summary],
    store=Store(vector=InMemoryVectorStore(), structured=InMemoryStructuredStore()),
    llm=OllamaAdapter(model="qwen3:0.6b"),
    embedding=SentenceTransformerEmbedding(model="all-MiniLM-L6-v2"),
)

# Pre-indexing: every schema runs against the document once; structured
# fields land in the structured store and embedded answers in the vector
# store, all before any query touches them.
pipeline.index(text="The court held that...", source_id="doc_001")

# Hybrid search: `filters` narrows candidates via the structured store,
# then vector similarity ranks within that subset.
results = pipeline.search(
    query="court holdings on liability",
    filters={"category": "legal"},
    top_k=5,
)

See docs/quickstart.md for the full walkthrough.
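The comment on `DocMeta` says field types drive filter operators automatically (Literal → eq/in, date → range ops). A minimal sketch of how such a mapping could be derived from type annotations — assumed behavior, not Ennoia's real implementation; the `operators_for` helper and the operator names are illustrative:

```python
from datetime import date
from typing import Literal, get_origin, get_type_hints

class DocMeta:
    category: Literal["legal", "medical", "financial"]
    doc_date: date

def operators_for(schema: type) -> dict[str, list[str]]:
    # Walk the schema's annotations and assign operators by field type.
    ops = {}
    for name, hint in get_type_hints(schema).items():
        if get_origin(hint) is Literal:
            ops[name] = ["eq", "in"]          # categorical: equality / membership
        elif hint is date:
            ops[name] = ["eq", "gte", "lte"]  # temporal: range comparisons
    return ops

print(operators_for(DocMeta))
# {'category': ['eq', 'in'], 'doc_date': ['eq', 'gte', 'lte']}
```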

Quick start (CLI)

# Iterate on a schema against a single document
ennoia try ./sample.txt --schema my_schemas.py

# Index a folder into a filesystem-backed store
ennoia index ./docs \
  --schema my_schemas.py \
  --store ./my_index \
  --collection cases \
  --llm ollama:qwen3:0.6b \
  --embedding sentence-transformers:all-MiniLM-L6-v2

# …or into a production Qdrant / pgvector backend
ennoia index ./docs \
  --schema my_schemas.py \
  --store qdrant:cases \
  --qdrant-url http://localhost:6333 \
  --llm openai:gpt-4o-mini \
  --embedding openai-embedding:text-embedding-3-small

# Hybrid search
ennoia search "employer duty to accommodate disability" \
  --schema my_schemas.py \
  --store ./my_index \
  --collection cases \
  --filter "jurisdiction=WA" \
  --filter "date_decided__gte=2020-01-01" \
  --top-k 5

See docs/cli.md.
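The `--filter` arguments above use a `field[__op]=value` convention, where a bare `field=value` means equality. A hypothetical parser for that syntax (the actual CLI code may differ):

```python
def parse_filter(arg: str) -> tuple[str, str, str]:
    # Split "field__op=value" into its three parts; "field=value" implies eq.
    key, _, value = arg.partition("=")
    field, sep, op = key.partition("__")
    return (field, op if sep else "eq", value)

print(parse_filter("jurisdiction=WA"))               # ('jurisdiction', 'eq', 'WA')
print(parse_filter("date_decided__gte=2020-01-01"))  # ('date_decided', 'gte', '2020-01-01')
```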

Serve an index (REST + MCP)

Stage 3 ships two remote interfaces. Both accept the same --store prefix scheme (filesystem path, qdrant:<collection>, or pgvector:<collection>) as ennoia index:

# REST — full CRUD for application integration.
export ENNOIA_API_KEY=sekret
ennoia api --store ./my_index --schema my_schemas.py --port 8080

# MCP — read-only tools (discover_schema, filter, search, retrieve) for agents,
# pointed at a production Qdrant collection.
export ENNOIA_QDRANT_URL=http://localhost:6333
ennoia mcp --store qdrant:cases --schema my_schemas.py --transport sse --port 8090

Agents consume the MCP flow discover_schema → filter → search(filter_ids=...) → retrieve out of the box. See docs/serve.md.
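The `--store` prefix scheme could be dispatched roughly as follows — a sketch inferred from the examples above, not the CLI's actual code: a known backend prefix selects that backend, and anything else is treated as a filesystem path.

```python
def resolve_store(spec: str) -> tuple[str, str]:
    # "qdrant:cases" -> ("qdrant", "cases"); "./my_index" -> filesystem path.
    backend, sep, rest = spec.partition(":")
    if sep and backend in ("qdrant", "pgvector"):
        return (backend, rest)
    return ("filesystem", spec)

print(resolve_store("qdrant:cases"))
print(resolve_store("./my_index"))
```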

Documentation

Full documentation lives under docs/: docs/quickstart.md, docs/cli.md, and docs/serve.md.

License

Apache 2.0. See LICENSE.txt and NOTICE.

