Fast autocomplete for any Python app, no search server required.


query-autocomplete


Local, typo-tolerant autocomplete without Elasticsearch.

Turn your own text into fast local suggestions with a compact prefix index, fuzzy prefix recovery, and a local Kneser-Ney scorer.

The easiest way to understand it is:

  1. start with one text string in memory
  2. move to the document model when you want stable document IDs
  3. move to a persisted document store when your data needs to live in a database

Why This Exists

Most autocomplete setups eventually turn into infrastructure work: a search server, a hosted index, background sync, operational tuning, and another moving part in your stack.

query-autocomplete is for the cases where you already have the text and want useful suggestions directly inside Python. It builds a local prefix index, handles partial words and common typos, and can keep working from an in-memory index, saved artifact, or SQLite-backed document store.

Use it when you want:

  • fast local suggestions from your own text
  • typo-tolerant prefix autocomplete without a search service
  • a small Python-native autocomplete layer for apps, docs, internal tools, or prototypes
  • an upgrade path from simple in-memory usage to persisted SQLite storage

It is probably not the right tool when you need:

  • distributed search across many machines
  • complex boolean filtering, faceting, or full-text ranking
  • hosted multi-tenant search infrastructure
  • semantic/vector search as the primary retrieval model

Install

pip install query-autocomplete

PDF and DOCX readers are included in the base install. Optional chunking support, which uses pysbd for sentence segmentation, is installed as an extra:

pip install "query-autocomplete[chunking]"

Basic Usage

Start with one text object and get suggestions back.

The text can be short or very long. A Document can be a phrase, a page, a transcript, or something closer to book length. The tiny examples here are just for readability.

from query_autocomplete import Autocomplete, Document

index = Autocomplete.create([
    Document(text="how to build a deck"),
])

print(index.suggest("how to bui", topk=5))
print(index.suggest("how to biuld", topk=5))

That is the core experience: give it text, create an in-memory autocomplete, ask for suggestions.

Slightly Bigger In-Memory Example

When you want better results, add more text or bigger documents.

from query_autocomplete import Autocomplete, Document

index = Autocomplete.create([
    Document(text="how to build a deck"),
    Document(text="how to build a desk"),
    Document(text="how to build with python"),
])

print(index.suggest("how to bui", topk=5))
print(index.suggest("how to build ", topk=5))

This is still the simplest mode and the best place to begin.

Realistic Examples

Search Box Suggestions

from query_autocomplete import Autocomplete, Document

index = Autocomplete.create([
    Document(text="wireless mechanical keyboard"),
    Document(text="wireless mouse for laptop"),
    Document(text="usb c docking station"),
    Document(text="noise cancelling headphones"),
])

def suggest_products(user_input: str) -> list[str]:
    return index.suggest(user_input, topk=5)

Documentation Autocomplete

from query_autocomplete import Autocomplete, Document

index = Autocomplete.create([
    Document(text="install query-autocomplete with pip"),
    Document(text="create an in-memory autocomplete index"),
    Document(text="save and load compiled autocomplete artifacts"),
    Document(text="use AdaptiveStore with SQLite persistence"),
])

print(index.suggest("use adap", topk=5))

Command Palette Suggestions

from query_autocomplete import Autocomplete, Document

commands = [
    Document(text="open settings"),
    Document(text="open keyboard shortcuts"),
    Document(text="create new project"),
    Document(text="clear recent files"),
    Document(text="toggle dark mode"),
]

palette = Autocomplete.create(commands, quality_profile="code_or_logs")
print(palette.suggest("open key", topk=3))

The Document Model

The real unit in the library is a Document.

from query_autocomplete import Document

doc = Document(
    text="how to build with python",
    doc_id="doc-123",
    metadata={"source": "docs"},
)

Fields:

  • text: the raw text used for learning suggestions.

  • doc_id: optional stable identifier for the document.

  • metadata: optional JSON-like metadata kept on in-memory document objects.

For the basic in-memory flow, you usually do not need doc_id.

For persisted mutable stores, doc_id becomes important because it is the public document identity used for document management.

Document.text does not need to be short. It can be a single query-like phrase, a paragraph, a full article, a long transcript, or very large source text. The system is designed to adapt to mixed short and long documents in the same store.

One document can contain multiple lines. Internally, the library may split those lines for training.

Quality Profiles

The default profile is balanced. It turns on conservative production-quality behavior such as context-aware scoring, typo-tolerant prefix lookup, and prefix-ladder collapse.

from query_autocomplete import Autocomplete, Document

index = Autocomplete.create(
    [
        Document(text="how to build a deck"),
        Document(text="how to build a desk"),
        Document(text="how to build with python"),
    ],
    quality_profile="precision",
    max_generated_words=4,
    phrase_min_count=3,
)

Available profiles:

  • balanced: the default; a conservative mix of fuzzy recall, quality filtering, and clean top results.

  • precision: stricter phrase mining and stronger runtime penalties for cleaner top results.

  • recall: keeps more candidates and disables prefix-ladder collapse by default; fuzzy prefix lookup remains enabled.

  • code_or_logs: keeps structured tokens and code/log-like continuations more readily.

  • natural_language: stricter phrase and diversity behavior for prose-like document collections.

Explicit BuildConfig and SuggestConfig objects override profile defaults.

For partial-token queries, inspect() diagnostics include prefix_match, which reports the typed fragment, the matched indexed prefix, the edit distance, and whether fuzzy recovery was used.

Inspecting Rankings

Use inspect(...) when you want to understand why suggestions ranked the way they did.

diagnostics = index.inspect("how to bui", topk=3)

for item in diagnostics:
    print(item.text, item.score)
    print(item.breakdown)
    print(item.expansion_trace)

Each diagnostic includes:

  • final score
  • prior score from prefix/context evidence
  • local scorer score
  • structural noise penalty
  • context support ratio and penalty
  • length adjustment
  • diversity group key
  • token or phrase expansion trace

suggest(...) still returns plain list[str]; diagnostics are only returned by inspect(...).

Persistence Helpers

If you want to keep a compiled autocomplete index around and load it later, you can save it as an artifact. This is a persistence helper, not the main Autocomplete mental model.

from query_autocomplete import Autocomplete, Document

index = Autocomplete.create([
    Document(text="how to build a deck"),
    Document(text="how to build a desk"),
])

index.save("my-index")

loaded = Autocomplete.load("my-index")
print(loaded.suggest("how to bui", topk=5))

You can also create and save in one step:

from query_autocomplete import Autocomplete, Document

Autocomplete.create(
    [
        Document(text="how to build a deck"),
        Document(text="how to build a desk"),
    ],
).save("my-index")

Path rules:

  • index.save(): auto-creates a managed folder under .query_autocomplete_artifacts/

  • index.save("docs-v1"): saves to .query_autocomplete_artifacts/docs-v1/

  • index.save("artifacts/docs-v1"): saves to that explicit relative path

  • Autocomplete.load("docs-v1"): loads from the managed artifact folder

This is persistence for a compiled serving artifact, not a mutable document database.

Database Model

When your document collection needs to change over time, move to AdaptiveStore.

This is the database-backed model:

  • one SQLite database is one document collection
  • documents can be added and deleted over time
  • the serving index is rebuilt from stored source documents
  • doc_id is the public identity for document management

SQL-Compatible Database

For a proper persisted mutable document collection, use the SQL-compatible store.

from query_autocomplete import AdaptiveStore, Document

store = AdaptiveStore.open("sqlite:///adaptive.sqlite3")

store.add_documents([
    Document(text="how to build a deck", doc_id="deck"),
    Document(text="how to build with python", doc_id="python"),
])

print(store.suggest("how to bui", topk=5))

Supported store URLs today:

  • sqlite:///adaptive.sqlite3
  • sqlite:////absolute/path/adaptive.sqlite3
  • a plain path like "./adaptive.sqlite3"

Each adaptive SQLite database owns one document collection. Name the database file however you want; the documents and current serving index live inside that file.

Adaptive SQL persistence is SQL-first:

  • source documents are stored in SQLite
  • the compiled serving index cache is also stored in SQLite
  • normal adaptive usage does not write .query_autocomplete_artifacts
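As a mental model of this SQL-first layout, one SQLite file can carry both tables. The table and column names below are invented for illustration; the library's actual schema is not documented here:

```python
import sqlite3

# One database file = one document collection (":memory:" for the demo).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE documents (        -- source of truth, used for retraining
        doc_id TEXT PRIMARY KEY,
        text   TEXT UNIQUE NOT NULL
    );
    CREATE TABLE index_cache (      -- compiled serving index, rebuilt on demand
        key   TEXT PRIMARY KEY,
        value BLOB NOT NULL
    );
""")
conn.execute("INSERT INTO documents VALUES (?, ?)", ("deck", "how to build a deck"))
rows = conn.execute("SELECT doc_id FROM documents ORDER BY doc_id").fetchall()
```

Because the source documents live next to the cache, the serving index can always be rebuilt from the same file.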

Working With Mutable Stores

Ingest documents

store.add_documents([
    Document(text="how to build a deck", doc_id="deck"),
    Document(text="how to build a desk", doc_id="desk"),
])

Rules for adaptive mutable stores:

  • doc_id is optional on input and auto-generated when missing
  • doc_id must be unique within the database
  • document content must also be unique within the database

So these are both rejected inside one database:

  • same doc_id with different content
  • same content with a different doc_id

Ingesting documents automatically invalidates the serving cache, which is rebuilt on demand the next time you query.
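The two uniqueness rules can be modeled with a pair of lookups. This is a toy sketch; the real store enforces the rules in SQLite, and the exact exception it raises is not shown here (ValueError is an assumption):

```python
class ToyStore:
    """Reject duplicate doc_ids and duplicate content, like an adaptive store."""

    def __init__(self) -> None:
        self.by_id: dict[str, str] = {}
        self.by_text: dict[str, str] = {}

    def add(self, doc_id: str, text: str) -> None:
        # Rule 1: same doc_id with different content is rejected.
        if doc_id in self.by_id and self.by_id[doc_id] != text:
            raise ValueError(f"doc_id {doc_id!r} already exists with different content")
        # Rule 2: same content under a different doc_id is rejected.
        if text in self.by_text and self.by_text[text] != doc_id:
            raise ValueError(f"content already stored under doc_id {self.by_text[text]!r}")
        self.by_id[doc_id] = text
        self.by_text[text] = doc_id
```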

Delete a document

In adaptive stores, doc_id is the public document identity.

store.remove_document("deck")

List documents

print(store.list_documents())

Open an existing store

store = AdaptiveStore.open("sqlite:///adaptive.sqlite3")

Clear a store

store.clear()

store.delete() is kept as a backwards-compatible alias for clear(). It clears the adaptive database tables but does not remove the SQLite file.

Migrate between SQL stores

store = AdaptiveStore.open("sqlite:///adaptive.sqlite3")
copied = store.migrate("sqlite:///adaptive-copy.sqlite3")

Reuse a custom serving profile

from query_autocomplete.config import SuggestConfig

autocomplete = store.with_suggest_config(SuggestConfig(default_top_k=3))
print(autocomplete.suggest("how to bui"))

AdaptiveAutocomplete also supports inspect(...) with the same diagnostics as the in-memory engine:

for item in autocomplete.inspect("how to bui", topk=3):
    print(item.text, item.breakdown.final_score)

Upgrade Path

You can export a live in-memory autocomplete into an adaptive store:

from query_autocomplete import AdaptiveStore, Autocomplete, Document

engine = Autocomplete.create([
    Document(text="how to build a deck"),
])

store = AdaptiveStore.import_autocomplete(
    "sqlite:///adaptive.sqlite3",
    engine=engine,
)

You can also export the source documents directly:

store = AdaptiveStore.open("sqlite:///adaptive.sqlite3")
store.add_documents(engine.export_documents())

An autocomplete loaded from Autocomplete.load(...) cannot be imported into an adaptive store, because artifact files are for serving and do not retain the full source-document provenance needed for mutable retraining.

Config

You usually do not need to touch the config at first, but when you do:

  • BuildConfig: controls index construction and compilation behavior (used by Autocomplete.create and AdaptiveStore)

  • SuggestConfig: controls serving behavior, e.g. via store.with_suggest_config(...)

Example:

from query_autocomplete import Autocomplete, Document
from query_autocomplete.config import BuildConfig, NormalizationConfig, SuggestConfig

build_config = BuildConfig(
    max_generated_words=4,
    max_indexed_prefix_chars=24,
    max_context_tokens=3,
    top_tokens_per_prefix=64,
    top_next_tokens=32,
    top_next_phrases=16,
    phrase_min_count=2,
    phrase_min_doc_freq=1,
    phrase_min_pmi=0.0,
    phrase_max_dominant_extension_ratio=0.95,
    phrase_boundary_generic_min_count=8,
    phrase_max_len=4,
    normalization=NormalizationConfig(
        lowercase=True,
        unicode_nfkc=True,
        strip_accents=False,
        strip_punctuation=True,
    ),
)

suggest_config = SuggestConfig(
    default_top_k=10,
    default_length_bias=0.5,
    max_suggestion_words=4,
    beam_width=24,
    token_branch_limit=8,
    phrase_branch_limit=8,
    prior_weight=0.35,
    noise_penalty_weight=0.35,
    suppress_redundant_continuations=True,
    min_context_support_ratio=0.0,
    context_support_penalty_weight=0.25,
    collapse_prefix_ladders=True,
    collapse_prefix_ladder_strategy="best",
    unknown_context_strategy="skip",
    normalize_phrase_scores_by_length=False,
    fuzzy_prefix="auto",
    max_edit_distance=2,
)

index = Autocomplete.create(
    [Document(text="how to build a deck")],
    build_config=build_config,
    suggest_config=suggest_config,
)

Most useful knobs:

  • BuildConfig.max_generated_words
  • BuildConfig.max_context_tokens: defaults to 3; values up to 6 are supported. Higher values are rejected because the binary context graph stores at most six-token history keys.
  • BuildConfig.phrase_min_count
  • BuildConfig.phrase_min_doc_freq
  • BuildConfig.phrase_min_pmi
  • SuggestConfig.default_top_k
  • SuggestConfig.max_suggestion_words
  • SuggestConfig.default_length_bias
  • SuggestConfig.context_support_penalty_weight
  • SuggestConfig.collapse_prefix_ladders
  • SuggestConfig.collapse_prefix_ladder_strategy
  • SuggestConfig.unknown_context_strategy
  • SuggestConfig.normalize_phrase_scores_by_length
  • SuggestConfig.fuzzy_prefix: defaults to "auto"; exact prefix lookup is tried first, then bounded fuzzy lookup recovers common one-edit typos on non-trivial fragments.
  • SuggestConfig.max_edit_distance: defaults to 2; serving may use a lower effective distance for short fragments to avoid noisy autocomplete matches.
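The bounded fuzzy lookup these two knobs control rests on edit distance. A minimal sketch of the underlying idea (plain dynamic-programming Levenshtein; the library's actual lookup over an indexed trie is more involved):

```python
def within_edit_distance(a: str, b: str, max_dist: int) -> bool:
    """Classic DP Levenshtein distance with a cheap length pre-check."""
    if abs(len(a) - len(b)) > max_dist:
        return False
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1] <= max_dist
```

For example, the typo "biuld" is within two edits of "build" (a transposition counts as two substitutions here), which is the kind of fragment the fuzzy lookup can recover.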

Phrase quality options are build-time settings. Changing them requires rebuilding the index or adaptive serving artifact.

Runtime quality options are serving-time settings. You can override them per call:

results = index.suggest(
    "how to build ",
    collapse_prefix_ladders=False,
)

collapse_prefix_ladders removes near-duplicate suggestions where one result is just a longer continuation of another. For example, instead of returning all of how to build, how to build a, and how to build a deck, the default keeps one representative according to collapse_prefix_ladder_strategy.
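Conceptually, the collapse can be sketched as dropping every suggestion that another suggestion extends at a word boundary. This toy version keeps the longest member of each ladder; the library's "best" strategy picks a representative by score instead:

```python
def collapse_prefix_ladders(suggestions: list[str]) -> list[str]:
    """Drop suggestions that another suggestion continues word-by-word."""
    kept = []
    for s in suggestions:
        # s survives only if no other suggestion extends it past a word boundary
        if not any(other.startswith(s + " ") for other in suggestions):
            kept.append(s)
    return kept
```

Applied to the example above, only "how to build a deck" survives, while unrelated suggestions are left untouched.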

Candidate fluency is scored locally with an interpolated Kneser-Ney bigram model built from the indexed corpus. This keeps serving lightweight while giving better contextual preferences than simple add-k smoothing.
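For intuition, here is a compact interpolated Kneser-Ney bigram scorer in the standard absolute-discounting formulation. The discount value and handling of unseen contexts are illustrative; the library's exact smoothing choices are not documented here:

```python
from collections import Counter

def kneser_ney_bigram(corpus: list[list[str]], discount: float = 0.75):
    """Return an interpolated Kneser-Ney bigram probability function."""
    bigrams = Counter()
    context_totals = Counter()          # c(v): how often v appears as a left context
    for sent in corpus:
        for v, w in zip(sent, sent[1:]):
            bigrams[(v, w)] += 1
            context_totals[v] += 1
    # Continuation counts: in how many distinct contexts does w appear?
    continuation = Counter(w for (_, w) in bigrams)
    n_bigram_types = len(bigrams)

    def prob(v: str, w: str) -> float:
        p_cont = continuation[w] / n_bigram_types if n_bigram_types else 0.0
        if context_totals[v] == 0:
            return p_cont               # unseen context: back off fully
        discounted = max(bigrams[(v, w)] - discount, 0.0) / context_totals[v]
        # Mass freed by discounting, redistributed via continuation probability.
        distinct_followers = sum(1 for (a, _) in bigrams if a == v)
        lam = discount * distinct_followers / context_totals[v]
        return discounted + lam * p_cont

    return prob
```

The continuation term is what distinguishes Kneser-Ney from add-k smoothing: a word is rewarded for appearing after many distinct contexts, not just for being frequent.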

Rerankers are request-time behavior:

results = index.suggest("how to build ", reranker=my_reranker)
diagnostics = index.inspect("how to build ", reranker=my_reranker)

If a request asks for longer continuations than the index was built for, the library emits a warning. For example, an index built with max_generated_words=4 warns when called with suggest(..., max_words=5).

The same warning behavior applies when serving asks for artifact detail that was not stored at build time: a partial query fragment longer than BuildConfig.max_indexed_prefix_chars, or SuggestConfig.token_branch_limit / phrase_branch_limit values larger than BuildConfig.top_next_tokens / top_next_phrases.
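A minimal sketch of this warning pattern (the warning category and message text are assumptions, not the library's actual output):

```python
import warnings

def check_request_against_build(max_generated_words: int, requested_max_words: int) -> None:
    """Warn when a request asks for more words than the index was built to serve."""
    if requested_max_words > max_generated_words:
        warnings.warn(
            f"index was built with max_generated_words={max_generated_words}; "
            f"requested max_words={requested_max_words} cannot be fully served",
            UserWarning,
        )
```

The request still runs; the warning just signals that results are capped by what was stored at build time.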

Repository Note

  • The published package is built from python-package/
  • The importable library source lives in core/src/query_autocomplete/

Third-Party Licensing

  • This package is MIT-licensed.
  • It depends on marisa-trie, whose current published licensing is MIT AND (BSD-2-Clause OR LGPL-2.1-or-later).
  • See THIRD_PARTY_LICENSES.md for a short note and links to upstream metadata.


Download files

Download the file for your platform.

Source Distribution

query_autocomplete-0.1.1.tar.gz (49.5 kB)

Built Distribution

query_autocomplete-0.1.1-py3-none-any.whl (55.3 kB)

File details

Details for the file query_autocomplete-0.1.1.tar.gz.

File metadata

  • Download URL: query_autocomplete-0.1.1.tar.gz
  • Size: 49.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for query_autocomplete-0.1.1.tar.gz:

  • SHA256: 8d635984d0e5e855dda2915f3c63b41071d7ae7eadf57c3ffc65a008e7292bba
  • MD5: ec26d595cbdb7ba0f966fffc751c5ae7
  • BLAKE2b-256: af3cf3a6a8191ed078ce1319767927c84142b9969e3f7db524b30f831a434fd7


Provenance

The following attestation bundles were made for query_autocomplete-0.1.1.tar.gz:

Publisher: release.yml on MarcellM01/query-autocomplete

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file query_autocomplete-0.1.1-py3-none-any.whl.

File metadata

File hashes

Hashes for query_autocomplete-0.1.1-py3-none-any.whl:

  • SHA256: c74b843b5bc1c12837138a81ee66c09f4975c21f293b51f009213932820a85d1
  • MD5: b3cfec73d2a569527002ca93b3da203c
  • BLAKE2b-256: d051d61fef86dc84c352c98d48ac8c35461594bb83ece168eeb93b157e55d04b


Provenance

The following attestation bundles were made for query_autocomplete-0.1.1-py3-none-any.whl:

Publisher: release.yml on MarcellM01/query-autocomplete

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
