nobs-canonicalize

Canonicalize verbose text strings into clean, deduplicated canonical groups using embeddings + LLM reasoning.

[!CAUTION] This library is in early development. It is not ready for production use.

Given a list of noisy, verbose text strings (e.g. medical interventions, product names, user inputs), this library:

  1. Clusters similar strings using BERTopic or FAISS+Leiden
  2. Names each cluster with a clean canonical label via LLM (o3-mini)
  3. Classifies outliers into the named groups, reducing ungrouped items

How it works

graph TD;
    A[Surface strings] --> B((1. Cluster<br>BERTopic or FAISS+Leiden));
    B -->|groups| C((2. Name groups<br>via LLM));
    B -->|outliers| D((3. Classify outliers<br>into groups));
    C -->|canonical labels| D;
    D --> E[Canonical Concepts];
  1. Cluster — groups surface strings using text-embedding-3-large embeddings
  2. Name — o3-mini generates a clean canonical label for each group
  3. Classify outliers — o3-mini assigns ungrouped strings into the named groups
  4. Output — deduplicated canonical concepts ready for downstream use
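The naming step (2) can be sketched with the OpenAI SDK. This is an illustrative sketch, not the library's internals: `build_naming_prompt` and `name_cluster` are hypothetical helpers, and the real prompts differ.

```python
def build_naming_prompt(subject: str, members: list[str]) -> str:
    # Hypothetical prompt builder; the library's actual prompts may differ.
    bullets = "\n".join(f"- {m}" for m in members)
    return (
        f"The following strings all describe the same {subject}.\n"
        f"{bullets}\n"
        "Reply with one short canonical label for the whole group."
    )


def name_cluster(subject: str, members: list[str], api_key: str) -> str:
    # Requires `pip install openai`; imported lazily so the prompt helper
    # above stays usable without the SDK installed.
    from openai import OpenAI

    client = OpenAI(api_key=api_key)
    response = client.chat.completions.create(
        model="o3-mini",
        reasoning_effort="low",  # the same knob nobs_canonicalize exposes
        messages=[
            {"role": "user", "content": build_naming_prompt(subject, members)}
        ],
    )
    return response.choices[0].message.content.strip()
```

One LLM call per cluster keeps the naming step cheap relative to embedding the full input list.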

Clustering backends

Two clustering backends are available, selectable via the backend parameter:

BERTopic (default)

Uses HDBSCAN + UMAP under the hood. Good for small-to-medium datasets.

FAISS+Leiden

Uses FAISS nearest-neighbor search to build a kNN similarity graph, then Leiden community detection to find clusters. Better for large datasets.

Why use FAISS+Leiden over BERTopic?

  • Scale — BERTopic's UMAP+HDBSCAN pipeline slows down significantly past ~50K strings. FAISS is built for large-scale similarity search and Leiden scales to graphs with millions of nodes.
  • Lighter dependencies — BERTopic pulls in hdbscan, umap-learn, and sentence-transformers. FAISS+Leiden only needs faiss-cpu and python-igraph.
  • Tunable graph construction — You control the kNN graph directly via n_neighbors and min_sim (minimum cosine similarity threshold), which often matters more than clustering algorithm parameters.
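The graph construction those two parameters control can be shown with a small brute-force sketch: pure NumPy stands in for the FAISS search, and connected components stand in for Leiden communities, so the effect of `n_neighbors` and `min_sim` is visible without either dependency.

```python
import numpy as np


def knn_edges(X: np.ndarray, n_neighbors: int = 5, min_sim: float = 0.3) -> set:
    """Brute-force stand-in for the FAISS kNN search: cosine top-k per row."""
    X = X / np.linalg.norm(X, axis=1, keepdims=True)  # unit rows -> dot = cosine
    sims = X @ X.T
    np.fill_diagonal(sims, -1.0)  # never link a string to itself
    edges = set()
    for i in range(len(X)):
        for j in np.argsort(sims[i])[::-1][:n_neighbors]:
            if sims[i, j] >= min_sim:  # min_sim prunes weak links
                edges.add((min(i, int(j)), max(i, int(j))))
    return edges


def components(n: int, edges: set) -> list[int]:
    """Connected components via union-find: a crude stand-in for Leiden."""
    parent = list(range(n))

    def find(a: int) -> int:
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    for a, b in edges:
        parent[find(a)] = find(b)
    return [find(i) for i in range(n)]
```

Raising `min_sim` sparsifies the graph, which tends to produce more, tighter clusters and more outliers; lowering it does the opposite.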

Comparison on 1,022 diet intervention strings:

Backend                            Clusters  Outliers  Outlier %
BERTopic (default)                 65        170       16.7%
FAISS+Leiden (default)             63        178       17.5%
FAISS+Leiden (min_cluster_size=3)  75        142       13.9%
FAISS+Leiden (min_cluster_size=2)  98        93        9.1%

Both backends produce similar cluster quality. Outliers are handled downstream by the LLM classification step regardless of backend.
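The min_cluster_size trend in the table can be illustrated with a small helper. This is a sketch of the general idea, not the library's internals: communities smaller than the threshold are demoted to outliers, so lowering the threshold trades outliers for a larger number of small clusters.

```python
from collections import Counter


def split_clusters(membership: list[int], min_cluster_size: int = 3):
    """Split community labels into kept clusters and outlier indices.

    membership[i] is the community label assigned to string i.
    Communities with fewer than min_cluster_size members become outliers.
    """
    sizes = Counter(membership)
    clusters: dict[int, list[int]] = {}
    outliers: list[int] = []
    for idx, label in enumerate(membership):
        if sizes[label] >= min_cluster_size:
            clusters.setdefault(label, []).append(idx)
        else:
            outliers.append(idx)
    return clusters, outliers
```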

Install

pip install nobs-canonicalize

Requires Python >= 3.11, < 3.15.

Example usage

OpenAI (BERTopic backend — default)

import os

from dotenv import load_dotenv
from rich import print

from nobs_canonicalize import nobs_canonicalize

load_dotenv()
openai_api_key = os.environ["OPENAI_API_KEY"]

texts = [
    "16/8 fasting",
    "16:8 fasting",
    "24-hour fasting",
    "24-hour one meal a day (OMAD) eating pattern",
    "2:1 ketogenic diet, low-glycemic-index diet",
    "30-day nutrition plan",
    "36-hour fast",
    "4-day fast",
    "40 hour fast, low carb meals",
    "4:3 fasting",
    "5-day fasting-mimicking diet (FMD) program",
    "7 day fast",
    "84-hour fast",
    "90/10 diet",
    "Adjusting macro and micro nutrient intake",
    "Adjusting target macros",
    "Macro and micro nutrient intake",
    "AllerPro formula",
    "Alternate Day Fasting (ADF), One Meal A Day (OMAD)",
    "American cheese",
    "Atkin's diet",
    "Atkins diet",
    "Avoid seed oils",
    "Avoiding seed oils",
    "Limiting seed oils",
    "Limited seed oils and processed foods",
    "Avoiding seed oils and processed foods",
]

clusters = nobs_canonicalize(
    texts=texts,
    openai_api_key=openai_api_key,
    reasoning_effort="low",  # low, medium, high
    subject="personal diet intervention outcomes",
)
print(clusters)

OpenAI (FAISS+Leiden backend)

clusters = nobs_canonicalize(
    texts=texts,
    openai_api_key=openai_api_key,
    reasoning_effort="low",
    subject="personal diet intervention outcomes",
    backend="faiss_leiden",  # use FAISS+Leiden instead of BERTopic
)

Azure OpenAI

import os

from dotenv import load_dotenv

from nobs_canonicalize import nobs_canonicalize_azure, AzureConfig

load_dotenv()

azure_config = AzureConfig(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-12-01-preview",
    azure_endpoint="https://your-resource.openai.azure.com/",
    embedding_deployment="text-embedding-3-large",  # default
    llm_deployment="o3-mini",                        # default
)

clusters = nobs_canonicalize_azure(
    texts=texts,
    reasoning_effort="low",
    subject="personal diet intervention outcomes",
    azure_config=azure_config,
    backend="faiss_leiden",  # optional, defaults to "bertopic"
)
print(clusters)

Example output


Contributing

git clone git@github.com:borisdev/nobs-canonicalize.git
cd nobs-canonicalize
pip install -e .
# set the OPENAI_API_KEY in the code or as an environment variable
poetry run pytest tests/test_models.py -v  # unit tests, no API key needed
poetry run pytest tests/test_main.py::test_nobs_canonicalize -v  # integration test

