lextok

Legalese tokenization

Rule-based tokenizer and pattern matching for basic Philippine entities using spaCy.

> [!IMPORTANT]
> Should be used in tandem with doclex.

Quickstart

poetry env use 3.11.6 # 3.12 not yet supported
poetry install
poetry shell
python -m spacy download en_core_web_sm # base model
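Alternatively, since lextok is published on PyPI, a plain pip install lextok into an existing Python 3.11 environment should also work; the en_core_web_sm base model still needs to be downloaded separately as shown above.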

Rationale

Before

import spacy

nlp = spacy.load("en_core_web_sm")  # no modifications to the model
doc1 = nlp("Sec. 36(b)(21)")
for token in doc1:
    print(f"{token.text=} {token.pos_=} {token.ent_type_=}, {token.i=}")
"""
token.text='Sec' token.pos_='PROPN' token.ent_type_='ORG' token.i=0
token.text='.' token.pos_='PUNCT' token.ent_type_='' token.i=1
token.text='36(b)(21' token.pos_='NUM' token.ent_type_='CARDINAL' token.i=2
token.text=')' token.pos_='PUNCT' token.ent_type_='' token.i=3
"""

After

from lextok import lextok

lex = lextok()  # adds a custom tokenizer, attribute ruler, and entity ruler
doc2 = lex("Sec. 36(b)(21)")
for token in doc2:
    print(f"{token.text=} {token.pos_=} {token.ent_type_=} {token.i=}")
"""
token.text='Sec.' token.pos_='NOUN' token.ent_type_='ProvisionNum' token.i=0
token.text='36(b)(21)' token.pos_='NUM' token.ent_type_='ProvisionNum' token.i=1
"""

Token entities can be merged:

from lextok import lextok

lex = lextok(finalize_entities=True)
doc2 = lex("Sec. 36(b)(21)")
for token in doc2:
    print(f"{token.text=} {token.pos_=} {token.ent_type_=} {token.i=}")
"""
token.text='Sec. 36(b)(21)' token.pos_='NUM' token.ent_type_='ProvisionNum' token.i=0
"""

Pattern creation

A pattern consists of a list of tokens. In this example, the word, the dot, and the number are each separate tokens:

[
    {"ORTH": {"IN": ["Tit", "Bk", "Ch", "Sub-Chap", "Art", "Sec", "Par", "Sub-Par"]}},
    {"ORTH": "."},  # with dot
    {"POS": "NUM"},
]

This is another pattern where the dot is connected to the word:

[
    {
        "ORTH": {
            "IN": [
                "Tit.",
                "Bk.",
                "Ch.",
                "Sub-Chap.",
                "Art.",
                "Sec.",
                "Par.",
                "Sub-Par.",
            ]
        }
    },
    {"POS": "NUM"},
]  # no separate dot
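
To sanity-check a pattern outside of lextok, the first variant above can be fed to spaCy's plain Matcher. This is only a sketch against the stock en_core_web_sm pipeline, whose default tokenizer (as the Before example shows) keeps the dot as a separate token; the exact output depends on the model's tagging:

import spacy
from spacy.matcher import Matcher

nlp = spacy.load("en_core_web_sm")
matcher = Matcher(nlp.vocab)
matcher.add(
    "provision",
    [
        [
            {"ORTH": {"IN": ["Tit", "Bk", "Ch", "Sub-Chap", "Art", "Sec", "Par", "Sub-Par"]}},
            {"ORTH": "."},  # the dot is its own token here
            {"POS": "NUM"},
        ]
    ],
)
doc = nlp("Sec. 36(b)(21)")
for _, start, end in matcher(doc):
    print(doc[start:end])  # Sec. 36(b)(21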

There are many variations. It becomes possible to generate a list of patterns algorithmically and save them to a *.jsonl file, e.g.:

from lextok.entity_rules_citeable import statutory_provisions

print(statutory_provisions.patterns)  # view patterns
statutory_provisions.create_file()  # located in /lextok/rules/ if path not specified
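
Assuming the generated *.jsonl follows spaCy's entity-ruler pattern format (one JSON object with label and pattern keys per line), it can also be loaded into an ordinary pipeline via the built-in entity_ruler; the filename below is only illustrative, not necessarily what create_file() produces:

import spacy

nlp = spacy.load("en_core_web_sm")
ruler = nlp.add_pipe("entity_ruler", before="ner")
ruler.from_disk("lextok/rules/statutory_provisions.jsonl")  # illustrative path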

Rules and Labels

Each Rule may consist of many patterns, and this collection of patterns can be associated with a Label.

In spaCy parlance, the label represents the ENT_TYPE, but for this library's purposes it is also adopted for non-entities to cater to SpanRuler patterns.

To distinguish labels meant strictly for entities from labels meant for non-entities, a collection of labels is defined in SPAN_RULER_LABELS. If a Rule's label is not included in this list, its patterns are governed by the EntityRuler; otherwise, by the SpanRuler.
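
In plain spaCy terms (a minimal sketch of the distinction, not lextok's actual wiring), EntityRuler matches land in doc.ents while SpanRuler matches land in a span group and may overlap:

import spacy

nlp = spacy.blank("en")

# EntityRuler: matches become doc.ents (non-overlapping)
entity_ruler = nlp.add_pipe("entity_ruler")
entity_ruler.add_patterns(
    [{"label": "ProvisionNum", "pattern": [{"LOWER": "sec"}, {"ORTH": "."}, {"LIKE_NUM": True}]}]
)

# SpanRuler: matches go to doc.spans["ruler"] and need not be entities
span_ruler = nlp.add_pipe("span_ruler")
span_ruler.add_patterns(
    [{"label": "Connector", "pattern": [{"LOWER": "of"}]}]  # "Connector" is a hypothetical non-entity label
)

doc = nlp("Sec. 5 of the Code")
print(doc.ents)                  # (Sec. 5,)
print(list(doc.spans["ruler"]))  # [of]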

Considering the number of Rules declared (or to be declared), instead of importing each instance individually, these can be extracted dynamically with Rule.extract_from_files().

Existing data structures

from lextok import Label, ENTITY_RULES, SPAN_RULES

for label in Label:
    print(label.name)  # pattern labels
for e in ENTITY_RULES:
    print(e)
for s in SPAN_RULES:
    print(s)
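
A quick way to eyeball what is already defined, relying only on the label and patterns attributes that the Rule examples in this README already use:

from lextok import ENTITY_RULES

for rule in ENTITY_RULES:
    print(rule.label, len(rule.patterns))  # label and number of patterns per rule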

Add more entity rules

Create a list of Rule objects, e.g.:

from lextok import lextok, Rule, ENTITY_RULES, Label

added_rules = [
    Rule(
        id="ministry-labor",
        label=Label.GovtDivision,
        patterns=[
            [
                {"LOWER": "the", "OP": "?"},
                {"LOWER": "ministry"},
                {"LOWER": "of"},
                {"LOWER": "labor"},
            ]
        ],
    ),
    Rule(
        id="intermediate-scrutiny",
        label=Label.Doctrine,
        patterns=[
            [
                {"LOWER": "test", "OP": "?"},
                {"LOWER": "of", "OP": "?"},
                {"LOWER": "intermediate"},
                {"LOWER": "scrutiny"},
                {"LEMMA": {"IN": ["test", "approach"]}, "OP": "?"},
            ]
        ],
    ),
]

# Include new rules in lextok language
nlp = lextok(finalize_entities=True, entity_rules=ENTITY_RULES + added_rules)

# Test detection
doc = nlp(
    "Lorem ipsum, sample text. The Ministry of Labor is a govt division. Hello world. The test of intermediate scrutiny is a constitutional law concept."
)
doc.ents  # (The Ministry of Labor, test of intermediate scrutiny)
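
The detected spans carry the labels of the matching rules; assuming the label strings mirror the Label member names, iterating over doc.ents would look like this:

for ent in doc.ents:
    print(ent.text, ent.label_)
# The Ministry of Labor GovtDivision
# test of intermediate scrutiny Doctrine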
