
Declarative Document Indexing (DDI) Schemas for RAG — LLM-powered pre-indexing and hybrid retrieval.


Ennoia


Ennoia is a Python framework for Declarative Document Indexing (DDI) — you describe what to extract from each document with typed Python schemas, and Ennoia runs the LLM extraction, persists the structured output, and serves hybrid filter + vector search on top. The same schemas drive a Python SDK, a CLI, a REST API, and an MCP tool surface.

Benchmark — Ennoia vs LangChain on ESCI products

The chart above is one run of the ESCI product-discovery benchmark: 1,000 sampled Amazon products and ~2,000 LLM-generated shopper queries split across two detail bands — poorly formed (vague, no brand or attributes) and richly detailed (brand + concrete features). Ennoia's DDI agent loop leads on every metric for poorly formed queries, where naive embeddings have no lexical anchor to latch onto; on richly detailed queries it reaches parity with LangChain's textbook one-embedding-per-product baseline. The corpus was deliberately chosen so each product fits a single embedding chunk, removing chunked-RAG's failure mode and biasing the comparison against Ennoia — on chunked corpora the gap widens further, since LangChain here searches the full original document text directly, without shredding it into pieces. See the cookbook page for full methodology and limitations.

Status: alpha (0.4.x). API is converging but may still shift.




Installation

Install

pip install "ennoia[all]"

The all extra pulls in every adapter, every store, the CLI, and the server. To stay slim, pick the extras you actually need:

| Extra | Adds |
| --- | --- |
| `openai`, `anthropic`, `openrouter` | Hosted LLM adapters |
| `ollama` | Local LLM adapter |
| `sentence-transformers` | Local embedding adapter |
| `filesystem` | Parquet + NumPy persistent stores |
| `qdrant` | Qdrant vector + hybrid store |
| `pgvector` | PostgreSQL + pgvector hybrid store |
| `cli` | `ennoia` command |
| `server` | FastAPI REST + FastMCP |
| `all` | Everything above |

The minimal zero-key local stack is pip install "ennoia[ollama,sentence-transformers,cli]".

Config

Generate a project-local config so you do not have to repeat --llm, --embedding, --store, and API keys on every invocation:

# Writes ./ennoia.ini with sensible defaults.
ennoia init

The [ennoia] section maps 1:1 to CLI flags; the [env] section preloads provider API keys via os.environ.setdefault before adapters initialize. Explicit CLI flags still win. Full key map in docs/cli.md.
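The setdefault behavior is worth seeing concretely: a key already exported in your shell always wins over the value in the [env] section. A small illustration with the stdlib configparser — the section names follow the description above, but the specific keys and values are made up for the demo:

```python
import configparser
import os

# An illustrative ennoia.ini; the keys here are examples, not a spec.
ini_text = """
[ennoia]
llm = ollama:qwen3:0.6b
embedding = sentence-transformers:all-MiniLM-L6-v2

[env]
OPENAI_API_KEY = sk-from-config-file
"""

config = configparser.ConfigParser()
config.read_string(ini_text)

# Simulate a key the user already exported in their shell.
os.environ["OPENAI_API_KEY"] = "sk-from-shell"

# setdefault only fills keys that are not already set,
# so explicit environment variables beat the ini file.
for key, value in config["env"].items():
    os.environ.setdefault(key.upper(), value)

print(os.environ["OPENAI_API_KEY"])  # prints "sk-from-shell"
```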

Scaffold a first-draft schema from a sample document:

ennoia craft ./contracts/sample.txt \
  --output my_extractors.py \
  --llm openai:gpt-4o-mini \
  --task "filter by governing law, effective date, and contract value"

craft writes a fixed three-class skeleton (Metadata for filterable fields, QuestionAnswer for ten Q&A pairs, Summary for a one-line overview). It is a prototype starting point, not production output — review every field, tighten str to Literal[...] once you know the value set, and re-run craft against more samples to refine. Full review checklist in docs/cli.md.
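Tightening str to Literal is the single highest-value edit on craft output: it lets both the type checker and the JSON schema sent to the LLM constrain values to the known set. A standalone sketch of the before/after, using only the typing module (no Ennoia imports; the field name is a running example from this README):

```python
from typing import Literal, get_args

# craft's first draft: any string passes.
governing_law_draft = str

# After reviewing a few sample documents, pin the known value set.
GoverningLaw = Literal["Delaware", "New York", "California"]


def is_valid(value: str) -> bool:
    """Return True when value is in the Literal's allowed set."""
    return value in get_args(GoverningLaw)


print(is_valid("Delaware"))  # True
print(is_valid("Atlantis"))  # False
```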

Then drive the index → try → search loop:

ennoia index ./contracts --schema my_extractors.py --store ./contracts_index
ennoia try ./contracts/sample.txt --schema my_extractors.py
ennoia search "payment obligations and late fees" \
  --schema my_extractors.py --store ./contracts_index \
  --filter "governing_law=Delaware" --top-k 5

Getting started

Introduction

Ennoia ships a small set of tools for designing typed schemas, validating them on real documents, and running a fully working RAG pipeline in minutes — without hand-rolling extraction prompts, chunking heuristics, or a glue layer between your filter store and your vector store.

A DDI Schema is a Python class that describes what to pull from each document. The class docstring is the extractor-level prompt; each Annotated[..., Field(description=...)] field is the per-field prompt that rides on the JSON schema sent to the LLM. Three kinds cover the common cases:

  • BaseStructure — typed scalars / enums you want to filter on (dates, jurisdictions, prices, categories). One row per document.
  • BaseSemantic — free-form text aimed at the vector index ("summarize the holding"). One embedding per document.
  • BaseCollection — repeated entities pulled out of the document (parties, citations, bullet-point features). One row per entity, each independently filterable and searchable.

A Pipeline wires schemas to an LLM adapter, an embedding adapter, and a Store (structured + vector). At index time it runs every schema against the document, executes the optional extend() branches that fan out into deeper extractors, and writes the structured output. At query time it does two-phase retrieval: structural filters narrow the candidate set, and the vector index ranks within. Filter-only is allowed; vector-only is allowed; the common case is both.
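The extend() fan-out can be pictured as conditional dispatch: a parent extraction's result decides which deeper extractors run on the same document. A toy sketch of that idea in plain Python — this is not Ennoia's actual API, just the control flow it describes:

```python
def extract_meta(text: str) -> dict:
    # Stand-in for an LLM call that classifies the document.
    return {"category": "legal" if "court" in text else "other"}


def extract_parties(text: str) -> dict:
    # Stand-in for a deeper, legal-only extractor.
    return {"parties": ["Acme Corp", "Jane Doe"]}


def extend(meta: dict):
    # Only legal documents fan out into the parties extractor.
    if meta["category"] == "legal":
        yield extract_parties


def index_document(text: str) -> list[dict]:
    meta = extract_meta(text)
    rows = [meta]
    for extractor in extend(meta):
        rows.append(extractor(text))
    return rows


print(index_document("The court held that..."))   # meta row + parties row
print(index_document("Quarterly revenue rose."))  # meta row only
```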

Why this beats naive chunk-and-embed: filterable structure (dates, parties, categories) is preserved instead of being shredded into chunks, and vague queries (the poorly formed band in the benchmark above) can still pivot on a structural filter when there is no lexical anchor for similarity search.
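The two-phase shape is easy to see in miniature: a structural filter prunes first, then cosine similarity ranks the survivors. A toy sketch with hand-made two-dimensional vectors and the stdlib only — not Ennoia's store code, just the retrieval shape:

```python
from math import sqrt

# Toy index: each row carries structured fields plus an embedding.
rows = [
    {"id": "d1", "category": "legal",   "vec": [0.9, 0.1]},
    {"id": "d2", "category": "legal",   "vec": [0.2, 0.8]},
    {"id": "d3", "category": "medical", "vec": [0.95, 0.05]},
]


def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))


def search(query_vec: list[float], filters: dict, top_k: int = 5) -> list[str]:
    # Phase 1: structural filters narrow the candidate set.
    candidates = [r for r in rows
                  if all(r[k] == v for k, v in filters.items())]
    # Phase 2: the vector index ranks within the survivors.
    candidates.sort(key=lambda r: cosine(query_vec, r["vec"]), reverse=True)
    return [r["id"] for r in candidates[:top_k]]


# d3 is closest to the query vector, but the filter removes it first.
print(search([1.0, 0.0], {"category": "legal"}))  # ['d1', 'd2']
```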

A 15-line minimal example:

from typing import Literal
from ennoia import BaseSemantic, BaseStructure, Pipeline, Store
from ennoia.adapters.embedding.sentence_transformers import SentenceTransformerEmbedding
from ennoia.adapters.llm.ollama import OllamaAdapter
from ennoia.store import InMemoryStructuredStore, InMemoryVectorStore


class DocMeta(BaseStructure):
    """Extract basic document metadata."""
    category: Literal["legal", "medical", "financial"]


class Summary(BaseSemantic):
    """What is the main topic of this document?"""


pipeline = Pipeline(
    schemas=[DocMeta, Summary],
    store=Store(vector=InMemoryVectorStore(), structured=InMemoryStructuredStore()),
    llm=OllamaAdapter(model="qwen3:0.6b"),
    embedding=SentenceTransformerEmbedding(model="all-MiniLM-L6-v2"),
)
pipeline.index(text="The court held employers must accommodate disabilities.", source_id="d1")
hits = pipeline.search(query="court holdings on liability", filters={"category": "legal"}, top_k=5)

The full contract-extraction example (with extend() branching, a BaseCollection, and a hybrid filter+vector search) lives at examples/04_extend_branching.py; a smaller two-extractor walkthrough is at examples/01_getting_started.py.

When to reach for Ennoia:

  • You need hybrid filter + vector search on the same corpus — not one or the other.
  • Your documents carry structure (dates, parties, categories, clauses) that plain chunking discards.
  • You want the LLM pre-processing step to be typed and visible in your diff, not buried in a prompt string at runtime.

Schema autodiscover and debug

Two CLI commands close the loop between writing a schema and trusting it:

  • ennoia craft <doc> --output schemas.py --llm <uri> --task "<retrieval goal>" — points an LLM at a sample document and a retrieval task, then writes a ready-to-edit schema module. If the file already exists, the LLM improves the existing schema instead of rewriting it from scratch — so you can run craft against several representative documents in turn to expand enum sets, add fields the first sample missed, and refine docstrings. The output is a fixed three-class skeleton (Metadata / QuestionAnswer / Summary) using only plain scalar types, by design — craft deliberately avoids Literal enums, Optional, list, and extend() branches it cannot infer from one example. It is a starting point for human editing, not a production schema: tighten types, drop or rename fields, and read every docstring before running ennoia index on real data.

  • ennoia try <doc> --schema schemas.py — runs a single extraction pass and prints the fields, per-extraction confidence, and extend() chain. Nothing is written to any store, so it is the fast loop while iterating on schemas. Output is colorised when stdout is a TTY.

The full review checklist for craft output (when to switch str to Literal, which fields belong in Metadata vs QuestionAnswer, how to write corpus-generic docstrings) is in docs/cli.md.

Quick setup

Once you have an ennoia.ini and a schema file, four commands run a complete pipeline end-to-end:

# Index a folder. --store accepts a filesystem path, qdrant:<collection>, or pgvector:<collection>.
ennoia index ./contracts --schema my_extractors.py --store ./contracts_index --collection contracts

# Hybrid search from the shell.
ennoia search "payment obligations and late fees" \
  --filter "governing_law=Delaware" --filter "effective_date__gte=2025-01-01" --top-k 5

# Boot a FastAPI REST server: /discover, /filter, /search, /retrieve, /index, /delete.
ennoia api --store ./contracts_index --schema my_extractors.py --api-key "$ENNOIA_API_KEY"

# Boot an MCP tool server (sse / stdio / http) for agents: discover_schema, filter, search, retrieve.
ennoia mcp --store ./contracts_index --schema my_extractors.py --transport sse

Both servers reuse the same store and schema you indexed against — no duplicated config, no schema drift between offline indexing and online serving. Server flags, auth, and the full endpoint table are in docs/serve.md; a worked agent-loop example using the MCP surface is in docs/cookbook/mcp-agent.md.
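The --filter flags above follow a Django-style field__op=value convention (eq when no suffix is given, gte in the example; the full operator set is documented in docs/cli.md). Parsing one expression is a one-liner per phase — a sketch, with the operator names assumed from the flags shown above:

```python
def parse_filter(expr: str) -> tuple[str, str, str]:
    """Split 'field[__op]=value' into (field, op, value); default op is eq."""
    key, _, value = expr.partition("=")
    field, sep, op = key.partition("__")
    return (field, op if sep else "eq", value)


print(parse_filter("governing_law=Delaware"))
# ('governing_law', 'eq', 'Delaware')
print(parse_filter("effective_date__gte=2025-01-01"))
# ('effective_date', 'gte', '2025-01-01')
```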




Contribution

Issues, bug reports, and pull requests are welcome at github.com/vunone/ennoia/issues.

Local dev loop:

uv sync --all-extras
uv run pytest                # 100% branch coverage is the bar
uv run pyright               # strict mode is non-negotiable
uv run ruff check
uv run ruff format --check

When sending a PR, keep the diff focused, add tests for new branches, and update the relevant doc page in docs/ — the docs and the README are the public contract.


Benchmark

The chart at the top of this README is generated by benchmark/runner.py. Methodology, query bands, expected shape of results, and limitations are documented in docs/cookbook/product-discovery-benchmark.md; the harness itself, with reproduction commands and CLI flags, lives in benchmark/README.md.


License

Apache 2.0. See LICENSE.txt and NOTICE.

Contributors

Sponsored by: Achiv :: Smart Market Research
