Building blocks for spaCy Matcher patterns
corpus-patterns
A utility library providing preparatory building blocks for spaCy Matcher patterns.
Create a custom tokenizer
import spacy

from corpus_patterns import set_tokenizer

nlp = spacy.blank("en")
nlp.tokenizer = set_tokenizer(nlp)
The tokenizer:
- Removes dashes from infixes
- Adds prefix/suffix rules for parentheses/brackets
- Adds special exceptions to treat dotted text as a single token
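A minimal sketch of the effect; the sample sentence and the abbreviation "R.A." are illustrative assumptions, not taken from the library's tests:

import spacy

from corpus_patterns import set_tokenizer

nlp = spacy.blank("en")
nlp.tokenizer = set_tokenizer(nlp)

# With the dotted-text exceptions installed, an abbreviation such as "R.A."
# should survive as a single token instead of being split at the periods.
doc = nlp("Cited in R.A. 8294 (as amended).")
print([token.text for token in doc])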
Use with a modified config file:
@spacy.registry.tokenizers("test")  # type: ignore
def create_corpus_tokenizer():
    def create_tokenizer(nlp):
        return set_tokenizer(nlp)

    return create_tokenizer

nlp = spacy.load(
    "en_core_web_sm",
    config={"nlp": {"tokenizer": {"@tokenizers": "test"}}},
)
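Registering the factory under a name lets the custom tokenizer be referenced from a spaCy config, so a pipeline saved with nlp.to_disk() can be reloaded with the same tokenizer, provided the module that performs the registration is imported first.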
Add .jsonl files to a directory
Each file should contain lines of spaCy Matcher patterns.
from pathlib import Path

from corpus_patterns import create_rules

create_rules(folder=Path("location-here"))  # check directory
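As a sketch of what one pattern line might look like, here is one way to write such a file. The {"label": ..., "pattern": [...]} schema is an assumption modeled on spaCy's usual pattern-file convention; check the library for the exact shape create_rules() expects:

import json
from pathlib import Path

folder = Path("location-here")
folder.mkdir(parents=True, exist_ok=True)

# One JSON object per line. The token attributes (LOWER, IS_DIGIT) are
# standard spaCy Matcher operators, but the surrounding schema is assumed.
pattern = {
    "label": "STATUTE",
    "pattern": [{"LOWER": "republic"}, {"LOWER": "act"}, {"IS_DIGIT": True}],
}
(folder / "statutes.jsonl").write_text(json.dumps(pattern) + "\n")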
Utils
annotate_fragments()
- given an nlp object and some *.txt files, creates a single annotation *.jsonl file

extract_lines_from_txt_files()
- accepts an iterator of *.txt files and yields each line (after sorting the lines and ensuring uniqueness of content)

split_data()
- given a list of text strings, splits them into two groups based on the ratio provided (defaults to 0.80) and returns a dictionary containing these groups
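A minimal usage sketch for split_data(); the ratio keyword and the keys of the returned dictionary are assumptions, so verify them against the library:

from corpus_patterns import split_data

texts = ["first line", "second line", "third line", "fourth line", "fifth line"]

# Split roughly 80/20. The keyword name `ratio` and the group names in the
# returned dictionary are assumed, not confirmed by the project description.
groups = split_data(texts, ratio=0.80)
for name, subset in groups.items():
    print(name, len(subset))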