
latincy-preprocess

Latin text preprocessing: U/V normalization, long-s OCR correction, diacritics stripping, and macron removal — with optional Rust acceleration and spaCy integration.

Consolidates latincy-uv and latincy-long-s into a single package.

Installation

pip install latincy-preprocess

For spaCy pipeline components:

pip install latincy-preprocess[spacy]

Quick Start

from latincy_preprocess import normalize

normalize("Gallia eft omnis diuisa in partes tres")
# 'Gallia est omnis divisa in partes tres'

Per-Normalizer Usage

U/V Normalization

Converts u-only Latin spelling to proper u/v distinction using rule-based analysis:

from latincy_preprocess import normalize_uv

normalize_uv("Arma uirumque cano")
# 'Arma virumque cano'

Rules handle digraphs (qu), trigraphs (ngu), morphological exceptions (cui, fuit), positional context (initial, intervocalic, post-consonant), and case preservation.
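
A minimal sketch of these rule categories in action (inputs are illustrative; outputs assume the rules behave as described above):

from latincy_preprocess import normalize_uv

normalize_uv("qui cui fuit")
# 'qui cui fuit' (qu digraph and morphological exceptions keep u)

normalize_uv("uia nouus")
# 'via novus' (word-initial and intervocalic u become v)

normalize_uv("Uirumque")
# 'Virumque' (case preserved)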

Long-S OCR Correction

Corrects OCR errors where historical long-s (ſ) was misread as f, using n-gram frequency analysis from Latin treebank data:

from latincy_preprocess import LongSNormalizer

normalizer = LongSNormalizer()

word, rules = normalizer.normalize_word_full("ftatua")
# ('statua', [TransformationRule(...)])

text = normalizer.normalize_text_full("funt in fundamento reipublicae ftatua")
# 'sunt in fundamento reipublicae statua'

Two-pass strategy: Pass 1 applies high-confidence rules (impossible bigrams like ft, fp, fc). Pass 2 uses 4-gram frequency disambiguation for ambiguous word-initial f- patterns.
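
A sketch of how the two passes divide the work (inputs are illustrative; outputs assume the strategy described above):

from latincy_preprocess import LongSNormalizer

normalizer = LongSNormalizer()

# Pass 1: 'ft' cannot occur in Latin, so the correction needs no disambiguation
normalizer.normalize_word_full("poteftas")
# ('potestas', [TransformationRule(...)])

# Pass 2: word-initial 'f-' is ambiguous ('fecit' is real Latin),
# so 4-gram frequencies decide between 'fena-' and 'sena-'
normalizer.normalize_word_full("fenatus")
# ('senatus', [TransformationRule(...)])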

Diacritics and Macrons

from latincy_preprocess import strip_diacritics, strip_macrons

strip_macrons("ārma")
# 'arma'

strip_diacritics("λόγος")
# 'λογος'
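
As the names suggest, strip_macrons removes only macrons while strip_diacritics removes all combining marks; both preserve case (see the 0.1.1 changelog entry below). Illustrative:

strip_macrons("Rōmānī")
# 'Romani'

strip_diacritics("Dóminus")
# 'Dominus' (capital D preserved)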

spaCy Integration

Three pipeline components are available as spaCy factories:

Unified Preprocessor (recommended)

Chains long-s correction → U/V normalization, in that order, so the U/V rules operate on corrected text:

import spacy

nlp = spacy.blank("la")
nlp.add_pipe("latin_preprocessor")

doc = nlp("Gallia eft omnis diuisa in partes tres")
doc._.preprocessed          # 'Gallia est omnis divisa in partes tres'
doc[1]._.preprocessed       # 'est'
doc[1]._.preprocessed_lemma # normalized lemma

Either normalizer can be disabled:

nlp.add_pipe("latin_preprocessor", config={"uv": False})
nlp.add_pipe("latin_preprocessor", config={"long_s": False})

Standalone Components

nlp.add_pipe("uv_normalizer")
# doc._.uv_normalized, token._.uv_normalized, token._.uv_normalized_lemma

nlp.add_pipe("long_s_normalizer")
# doc._.long_s_normalized, token._.long_s_normalized
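
For example, the standalone U/V component exposes its results through the extensions listed above:

import spacy

nlp = spacy.blank("la")
nlp.add_pipe("uv_normalizer")

doc = nlp("Arma uirumque cano")
doc._.uv_normalized
# 'Arma virumque cano'

doc[1]._.uv_normalized
# 'virumque'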

Rust Backend

When compiled with maturin, a Rust backend provides roughly 3x the throughput of the pure-Python implementation for both normalizers. The backend is selected automatically:

from latincy_preprocess import backend

backend()  # 'rust' or 'python'

The Python backend is fully functional and used as the fallback.

Accuracy

U/V Normalization

Dataset                               Accuracy
Curated test set (100 sentences)      100%
UD Latin PROIEL (~21K u/v chars)      ~98%
UD Latin Perseus (~18K u/v chars)     ~97%

Long-S Correction

Pass 1 rules have a 0.00% false positive rate. Pass 2 disambiguation uses a protected allowlist of ~170 common Latin f- words (inline in long_s/_rules.py) plus n-gram frequency tables (JSON files in long_s/data/ngrams/).
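
As an illustration, assuming common f- words such as fecit appear on the allowlist (the empty rule list for an unchanged word is also an assumption):

from latincy_preprocess import LongSNormalizer

normalizer = LongSNormalizer()

normalizer.normalize_word_full("fecit")
# ('fecit', []) (genuine f- word, protected by the allowlist)

normalizer.normalize_word_full("fuper")
# ('super', [TransformationRule(...)]) ('supe' outscores 'fupe' in the 4-gram tables)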

Changelog

0.1.1

  • Fix: strip_diacritics() no longer lowercases text; it now preserves the original case. Lowercasing was an unintended side effect that conflated two separate operations.

0.1.0

  • Initial release: U/V normalization, long-s OCR correction, diacritics stripping, macron removal, spaCy integration, optional Rust backend.

Citation

@software{latincy_preprocess,
  title = {latincy-preprocess: Text Preprocessing for LatinCy Projects},
  author = {Burns, Patrick J.},
  year = {2026},
  url = {https://github.com/diyclassics/latincy-preprocess}
}

License

MIT


