
Corp Extractor

Extract structured subject-predicate-object statements from unstructured text using the T5-Gemma 2 model.

Features

  • 6-Stage Pipeline (v0.5.0): Modular plugin-based architecture for full entity resolution
  • Structured Extraction: Converts unstructured text into subject-predicate-object triples
  • Entity Type Recognition: Identifies 12 entity types (ORG, PERSON, GPE, LOC, PRODUCT, EVENT, etc.)
  • Entity Qualification (v0.5.0): Adds roles, identifiers (LEI, ticker, company numbers) via external APIs
  • Canonicalization (v0.5.0): Resolves entities to canonical forms with fuzzy matching
  • Statement Labeling (v0.5.0): Sentiment analysis, relation type classification, confidence scoring
  • GLiNER2 Integration (v0.4.0): Uses GLiNER2 (205M params) for entity recognition and relation extraction
  • Predefined Predicates: Optional --predicates list for GLiNER2 relation extraction mode
  • Beam Merging: Combines top beams for better coverage instead of picking one
  • Embedding-based Dedup: Uses semantic similarity to detect near-duplicate predicates
  • Predicate Taxonomies: Normalize predicates to canonical forms via embeddings
  • Command Line Interface: Full-featured CLI with split, pipeline, and plugins commands
  • Multiple Output Formats: Get results as Pydantic models, JSON, XML, or dictionaries

Installation

pip install corp-extractor

The GLiNER2 model (205M params) is downloaded automatically on first use.

Note: This package requires transformers>=5.0.0 for T5-Gemma2 model support.

For GPU support, install PyTorch with CUDA first:

pip install torch --index-url https://download.pytorch.org/whl/cu121
pip install corp-extractor

For Apple Silicon (M1/M2/M3), MPS acceleration is automatically detected:

pip install corp-extractor  # MPS used automatically
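Device auto-detection typically follows the usual PyTorch precedence (CUDA, then MPS, then CPU); a minimal sketch of that pattern, not necessarily the library's exact logic:

```python
import torch

def pick_device(requested: str = "auto") -> str:
    """Resolve 'auto' to the best available backend: CUDA > MPS > CPU."""
    if requested != "auto":
        return requested
    if torch.cuda.is_available():
        return "cuda"
    if torch.backends.mps.is_available():
        return "mps"
    return "cpu"

print(pick_device())  # e.g. "mps" on Apple Silicon
```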

Quick Start

from statement_extractor import extract_statements

result = extract_statements("""
    Apple Inc. announced the iPhone 15 at their September event.
    Tim Cook presented the new features to customers worldwide.
""")

for stmt in result:
    print(f"{stmt.subject.text} ({stmt.subject.type})")
    print(f"  --[{stmt.predicate}]--> {stmt.object.text}")
    print(f"  Confidence: {stmt.confidence_score:.2f}")  # NEW in v0.2.0

Command Line Interface

The library includes a CLI for quick extraction from the terminal.

Install Globally (Recommended)

Installing globally makes the corp-extractor command available from any shell:

# Using uv (recommended)
uv tool install "corp-extractor[embeddings]"

# Using pipx
pipx install "corp-extractor[embeddings]"

# Using pip
pip install "corp-extractor[embeddings]"

# Then use anywhere
corp-extractor "Your text here"

Quick Run with uvx

Run directly without installing using uv:

uvx corp-extractor "Apple announced a new iPhone."

Note: The first run downloads the model (~1.5GB), which may take a few minutes.

Usage Examples

The CLI provides three main commands: split, pipeline, and plugins.

# Simple extraction (Stage 1 only, fast)
corp-extractor split "Apple Inc. announced the iPhone 15."
corp-extractor split -f article.txt --json

# Full 6-stage pipeline (entity resolution, canonicalization, labeling, taxonomy)
corp-extractor pipeline "Amazon CEO Andy Jassy announced plans to hire workers."
corp-extractor pipeline -f article.txt --stages 1-3
corp-extractor pipeline "..." --disable-plugins sec_edgar

# Plugin management
corp-extractor plugins list
corp-extractor plugins list --stage 3
corp-extractor plugins info gleif_qualifier

Split Command (Simple Extraction)

corp-extractor split "Tim Cook is CEO of Apple." --json
corp-extractor split -f article.txt --beams 8 --verbose
cat article.txt | corp-extractor split -

Pipeline Command (Full Entity Resolution)

# Run all 6 stages
corp-extractor pipeline "Apple CEO Tim Cook announced..."

# Run specific stages
corp-extractor pipeline "..." --stages 1-3         # Stages 1, 2, 3
corp-extractor pipeline "..." --stages 1,2,5       # Stages 1, 2, 5
corp-extractor pipeline "..." --skip-stages 4,5    # Skip stages 4 and 5

# Plugin selection
corp-extractor pipeline "..." --plugins gleif,companies_house
corp-extractor pipeline "..." --disable-plugins sec_edgar

CLI Reference

Usage: corp-extractor [COMMAND] [OPTIONS]

Commands:
  split      Simple extraction (T5-Gemma only)
  pipeline   Full 6-stage pipeline with entity resolution
  plugins    List or inspect available plugins

Split Options:
  -f, --file PATH              Read input from file
  -o, --output [table|json|xml] Output format (default: table)
  --json / --xml               Output format shortcuts
  -b, --beams INTEGER          Number of beams (default: 4)
  --no-gliner                  Disable GLiNER2 extraction
  --predicates TEXT            Comma-separated predicates for relation extraction
  --device [auto|cuda|mps|cpu] Device to use (default: auto)
  -v, --verbose                Show confidence scores and metadata

Pipeline Options:
  --stages TEXT                Stages to run (e.g., '1-3' or '1,2,5')
  --skip-stages TEXT           Stages to skip (e.g., '4,5')
  --plugins TEXT               Enable only these plugins (comma-separated)
  --disable-plugins TEXT       Disable these plugins (comma-separated)
  -o, --output [table|json|yaml|triples]  Output format

New in v0.2.0: Quality Scoring & Beam Merging

By default, the library now:

  • Scores each triple for groundedness based on whether entities appear in source text
  • Merges top beams instead of selecting one, improving coverage
  • Uses embeddings to detect semantically similar predicates ("bought" ≈ "acquired")

from statement_extractor import ExtractionOptions, ScoringConfig

# Precision mode - filter low-confidence triples
scoring = ScoringConfig(min_confidence=0.7)
options = ExtractionOptions(scoring_config=scoring)
result = extract_statements(text, options)

# Access confidence scores
for stmt in result:
    print(f"{stmt} (confidence: {stmt.confidence_score:.2f})")

New in v0.2.0: Predicate Taxonomies

Normalize predicates to canonical forms using embedding similarity:

from statement_extractor import PredicateTaxonomy, ExtractionOptions

taxonomy = PredicateTaxonomy(predicates=[
    "acquired", "founded", "works_for", "announced",
    "invested_in", "partnered_with"
])

options = ExtractionOptions(predicate_taxonomy=taxonomy)
result = extract_statements(text, options)

# "bought" -> "acquired" via embedding similarity
for stmt in result:
    if stmt.canonical_predicate:
        print(f"{stmt.predicate} -> {stmt.canonical_predicate}")

New in v0.2.2: Contextualized Matching

Predicate canonicalization and deduplication now use contextualized matching:

  • Compares full "Subject Predicate Object" strings against source text
  • Better accuracy because predicates are evaluated in context
  • When duplicates are found, keeps the statement with the best match to source text

This means "Apple bought Beats" vs "Apple acquired Beats" are compared holistically, not just "bought" vs "acquired".
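The idea can be sketched with a toy bag-of-words similarity standing in for the real sentence-transformer embeddings the library uses:

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; the library uses sentence-transformers.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

source = "Apple acquired Beats Electronics in 2014."
candidates = ["Apple bought Beats Electronics", "Apple acquired Beats Electronics"]

# Contextualized matching: score each full "Subject Predicate Object"
# string against the source text, and keep the best-matching duplicate.
best = max(candidates, key=lambda c: cosine(embed(c), embed(source)))
print(best)  # the "acquired" variant matches the source more closely
```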

New in v0.2.3: Entity Type Merging & Reversal Detection

Entity Type Merging

When deduplicating statements, entity types are now automatically merged. If one statement has UNKNOWN type and a duplicate has a specific type (like ORG or PERSON), the specific type is preserved:

# Before deduplication:
# Statement 1: AtlasBio Labs (UNKNOWN) --sued by--> CuraPharm (ORG)
# Statement 2: AtlasBio Labs (ORG) --sued by--> CuraPharm (ORG)

# After deduplication:
# Single statement: AtlasBio Labs (ORG) --sued by--> CuraPharm (ORG)
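The merge rule amounts to preferring the specific type over the fallback; a sketch of the idea, not the library's internal code:

```python
def merge_entity_type(kept: str, duplicate: str) -> str:
    """Prefer a specific entity type (ORG, PERSON, ...) over the UNKNOWN fallback."""
    return duplicate if kept == "UNKNOWN" and duplicate != "UNKNOWN" else kept

print(merge_entity_type("UNKNOWN", "ORG"))  # ORG
print(merge_entity_type("ORG", "UNKNOWN"))  # ORG
```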

Subject-Object Reversal Detection

The library now detects when subject and object may have been extracted in the wrong order by comparing embeddings against source text:

from statement_extractor import PredicateComparer

comparer = PredicateComparer()

# Automatically detect and fix reversals
fixed_statements = comparer.detect_and_fix_reversals(statements)

for stmt in fixed_statements:
    if stmt.was_reversed:
        print(f"Fixed reversal: {stmt}")

How it works:

  1. For each statement with source text, compares:
    • "Subject Predicate Object" embedding vs source text
    • "Object Predicate Subject" embedding vs source text
  2. If the reversed form has higher similarity, swaps subject and object
  3. Sets was_reversed=True to indicate the correction

During deduplication, reversed duplicates (e.g., "A -> P -> B" and "B -> P -> A") are now detected and merged, with the correct orientation determined by source text similarity.
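The reversal check can be sketched with difflib's string similarity standing in for the embedding cosine similarity the library computes:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    # Stand-in for embedding cosine similarity (the library uses sentence-transformers).
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def fix_reversal(subj: str, pred: str, obj: str, source: str):
    """Compare forward vs reversed statement against the source text;
    swap subject and object if the reversed form matches better."""
    forward = similarity(f"{subj} {pred} {obj}", source)
    backward = similarity(f"{obj} {pred} {subj}", source)
    if backward > forward:
        return obj, subj, True   # swapped, was_reversed=True
    return subj, obj, False

source = "CuraPharm sued AtlasBio Labs over patent claims."
subj, obj, was_reversed = fix_reversal("AtlasBio Labs", "sued", "CuraPharm", source)
print(subj, obj, was_reversed)  # CuraPharm AtlasBio Labs True
```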

New in v0.5.0: Pipeline Architecture

v0.5.0 introduces a 6-stage plugin-based pipeline for comprehensive entity resolution, statement enrichment, and taxonomy classification.

Pipeline Stages

Stage  Name              Input              Output                Key Tech
1      Splitting         Text               RawTriple[]           T5-Gemma2
2      Extraction        RawTriple[]        PipelineStatement[]   GLiNER2
3      Qualification     Entities           QualifiedEntity[]     Gemma3, APIs
4      Canonicalization  QualifiedEntity[]  CanonicalEntity[]     Fuzzy matching
5      Labeling          Statements         LabeledStatement[]    Sentiment, etc.
6      Taxonomy          Statements         TaxonomyResult[]      MNLI, Embeddings

Pipeline Python API

from statement_extractor.pipeline import ExtractionPipeline, PipelineConfig

# Run full pipeline
pipeline = ExtractionPipeline()
ctx = pipeline.process("Amazon CEO Andy Jassy announced plans to hire workers.")

# Access results at each stage
print(f"Raw triples: {len(ctx.raw_triples)}")
print(f"Statements: {len(ctx.statements)}")
print(f"Labeled: {len(ctx.labeled_statements)}")

# Output with fully qualified names
for stmt in ctx.labeled_statements:
    print(f"{stmt.subject_fqn} --[{stmt.statement.predicate}]--> {stmt.object_fqn}")
    # e.g., "Andy Jassy (CEO, Amazon) --[announced]--> plans to hire workers"

Pipeline Configuration

from statement_extractor.pipeline import PipelineConfig, ExtractionPipeline

# Run only specific stages
config = PipelineConfig(
    enabled_stages={1, 2, 3},  # Skip canonicalization and labeling
    disabled_plugins={"sec_edgar_qualifier"},  # Disable specific plugins
)
pipeline = ExtractionPipeline(config)
ctx = pipeline.process(text)

# Alternative: create config from stage string
config = PipelineConfig.from_stage_string("1-3")  # Stages 1, 2, 3

Built-in Plugins

Splitters (Stage 1):

  • t5_gemma_splitter - T5-Gemma2 statement extraction

Extractors (Stage 2):

  • gliner2_extractor - GLiNER2 entity recognition and relation extraction

Qualifiers (Stage 3):

  • person_qualifier - PERSON → role, org (uses Gemma3)
  • gleif_qualifier - ORG → LEI, jurisdiction (GLEIF API)
  • companies_house_qualifier - ORG → UK company number
  • sec_edgar_qualifier - ORG → SEC CIK, ticker

Canonicalizers (Stage 4):

  • organization_canonicalizer - ORG canonical names
  • person_canonicalizer - PERSON name variants

Labelers (Stage 5):

  • sentiment_labeler - Statement sentiment analysis

Taxonomy Classifiers (Stage 6):

  • mnli_taxonomy_classifier - MNLI zero-shot classification against ESG taxonomy
  • embedding_taxonomy_classifier - Embedding similarity-based taxonomy classification

Taxonomy classifiers return multiple labels per statement above the confidence threshold.
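The multi-label selection step can be sketched as thresholding classifier scores (the scores and label names here are hypothetical; the library gets them from its MNLI or embedding classifier):

```python
def labels_above_threshold(scores: dict[str, float], threshold: float = 0.5) -> list[str]:
    """Return every taxonomy label whose score clears the threshold,
    highest first, rather than only the single best label."""
    return [label for label, score in
            sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
            if score >= threshold]

# Hypothetical classifier output for one statement:
scores = {"Emissions": 0.91, "Governance": 0.12, "Labor practices": 0.66}
print(labels_above_threshold(scores))  # ['Emissions', 'Labor practices']
```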

New in v0.6.0: Company Embedding Database

v0.6.0 introduces a company embedding database for fast entity qualification using vector similarity search.

Data Sources

Source           Records   Identifier
GLEIF            ~3.2M     LEI (Legal Entity Identifier)
SEC Edgar        ~10K      CIK (Central Index Key)
Companies House  ~5M       UK Company Number
Wikidata         Variable  Wikidata QID

Building the Database

# Import from authoritative sources
corp-extractor db import-gleif --download
corp-extractor db import-sec
corp-extractor db import-companies-house --download
corp-extractor db import-wikidata --limit 50000

# Check status
corp-extractor db status

# Search for a company
corp-extractor db search "Microsoft"

Using in Pipeline

The database is automatically used by the embedding_company_qualifier plugin for Stage 3 (Qualification):

from statement_extractor.pipeline import ExtractionPipeline

pipeline = ExtractionPipeline()
ctx = pipeline.process("Microsoft acquired Activision Blizzard.")

for stmt in ctx.labeled_statements:
    print(f"{stmt.subject_fqn}")  # e.g., "Microsoft (sec_edgar:0000789019)"

Publishing to HuggingFace

# Upload database
export HF_TOKEN="hf_..."
corp-extractor db upload ~/.cache/corp-extractor/companies.db

# Download pre-built database
corp-extractor db download

See COMPANY_DB.md for complete build and publish instructions.

New in v0.4.0: GLiNER2 Integration

v0.4.0 replaces spaCy with GLiNER2 (205M params) for entity recognition and relation extraction. GLiNER2 is a unified model that handles NER, text classification, structured data extraction, and relation extraction with CPU-optimized inference.

Why GLiNER2?

The T5-Gemma model excels at:

  • Triple isolation - identifying that a relationship exists
  • Coreference resolution - resolving pronouns to named entities

GLiNER2 now handles:

  • Entity recognition - refining subject/object boundaries
  • Relation extraction - using 324 default predicates across 21 categories
  • Entity scoring - scoring how "entity-like" subjects/objects are
  • Confidence scoring - real confidence values via include_confidence=True

Default Predicates

GLiNER2 uses 324 predicates organized into 21 categories (ownership, employment, funding, etc.). These are loaded from default_predicates.json and include descriptions and confidence thresholds.

Key features:

  • All matches returned - Every matching relation is returned, not just the best one
  • Category-based extraction - Iterates through categories to stay under GLiNER2's ~25 label limit
  • Custom predicate files - Provide your own JSON file with custom predicates
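The category-based batching can be sketched as follows (the predicate names are hypothetical, and the real loader reads them from default_predicates.json):

```python
def batch_predicates(predicates_by_category: dict[str, list[str]], limit: int = 25):
    """Yield predicate batches that each stay under the model's label limit,
    iterating category by category and splitting oversized categories."""
    for category, preds in predicates_by_category.items():
        for i in range(0, len(preds), limit):
            yield category, preds[i:i + limit]

catalog = {
    "ownership": [f"owns_variant_{i}" for i in range(30)],  # hypothetical
    "employment": ["works_for", "employs"],
}
batches = list(batch_predicates(catalog))
print([(c, len(p)) for c, p in batches])  # [('ownership', 25), ('ownership', 5), ('employment', 2)]
```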

Extraction Modes

Mode 1: Default Predicates (recommended)

from statement_extractor import extract_statements

# Uses 324 built-in predicates automatically
result = extract_statements("John works for Apple Inc. in Cupertino.")
# Returns ALL matching relations

Mode 2: Custom Predicate List

from statement_extractor import extract_statements, ExtractionOptions

options = ExtractionOptions(predicates=["works_for", "founded", "acquired", "headquartered_in"])
result = extract_statements("John works for Apple Inc. in Cupertino.", options)

Or via CLI:

corp-extractor "John works for Apple Inc." --predicates "works_for,founded,acquired"

Mode 3: Custom Predicate File

from statement_extractor.pipeline import ExtractionPipeline, PipelineConfig

config = PipelineConfig(
    extractor_options={"predicates_file": "/path/to/custom_predicates.json"}
)
pipeline = ExtractionPipeline(config)
ctx = pipeline.process("John works for Apple Inc.")

Or via CLI:

corp-extractor pipeline "John works for Apple Inc." --predicates-file custom_predicates.json

Two Candidate Extraction Methods

For each statement, two candidates are generated and the best is selected:

Method  Description
hybrid  Model subject/object + GLiNER2 predicate
gliner  All components refined by GLiNER2 entity recognition

for stmt in result:
    print(f"{stmt.subject.text} --[{stmt.predicate}]--> {stmt.object.text}")
    print(f"  Method: {stmt.extraction_method}")  # hybrid, gliner, or model
    print(f"  Confidence: {stmt.confidence_score:.2f}")

Combined Quality Scoring

Confidence scores combine semantic similarity and entity recognition:

Component             Weight  Description
Semantic similarity   50%     Cosine similarity between source text and reassembled triple
Subject entity score  25%     How entity-like the subject is (via GLiNER2)
Object entity score   25%     How entity-like the object is (via GLiNER2)

Entity scoring (via GLiNER2):

  • Recognized entity with high confidence: 1.0
  • Recognized entity with moderate confidence: 0.8
  • Partially recognized: 0.6
  • Not recognized: 0.2
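Putting the weights together, the combination is a straightforward weighted sum; a sketch (the function name is illustrative):

```python
def combined_confidence(semantic: float, subject_entity: float, object_entity: float) -> float:
    """Weighted combination from the table above: 50% semantic similarity,
    25% subject entity score, 25% object entity score."""
    return 0.5 * semantic + 0.25 * subject_entity + 0.25 * object_entity

# e.g. strong semantic match, recognized subject, partially recognized object:
print(round(combined_confidence(0.9, 1.0, 0.6), 2))  # 0.85
```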

Extraction Method Tracking

Each statement includes an extraction_method field:

  • hybrid - Model subject/object + GLiNER2 predicate
  • gliner - All components refined by GLiNER2 entity recognition
  • model - All components from T5-Gemma model (only when --no-gliner)

Best Triple Selection

By default, only the highest-scoring triple is kept for each source sentence.
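Per-sentence selection amounts to a grouped maximum; a sketch with hypothetical (sentence, triple, score) candidates:

```python
def best_per_sentence(candidates):
    """candidates: iterable of (sentence, triple, score);
    keep only the top-scoring triple for each source sentence."""
    best = {}
    for sentence, triple, score in candidates:
        if sentence not in best or score > best[sentence][1]:
            best[sentence] = (triple, score)
    return {s: t for s, (t, _) in best.items()}

candidates = [
    ("Apple bought Beats.", ("Apple", "bought", "Beats"), 0.72),
    ("Apple bought Beats.", ("Apple", "acquired", "Beats"), 0.88),
]
print(best_per_sentence(candidates))  # keeps only the 0.88 candidate
```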

To keep all candidate triples:

options = ExtractionOptions(all_triples=True)
result = extract_statements(text, options)

Or via CLI:

corp-extractor "Your text" --all-triples --verbose

Disable GLiNER2 extraction to use only model output:

options = ExtractionOptions(use_gliner_extraction=False)
result = extract_statements(text, options)

Or via CLI:

corp-extractor "Your text" --no-gliner

Disable Embeddings

options = ExtractionOptions(
    embedding_dedup=False,  # Use exact text matching
    merge_beams=False,      # Select single best beam
)
result = extract_statements(text, options)

Output Formats

from statement_extractor import (
    extract_statements,
    extract_statements_as_json,
    extract_statements_as_xml,
    extract_statements_as_dict,
)

# Pydantic models (default)
result = extract_statements(text)

# JSON string
json_output = extract_statements_as_json(text)

# Raw XML (model's native format)
xml_output = extract_statements_as_xml(text)

# Python dictionary
dict_output = extract_statements_as_dict(text)

Batch Processing

from statement_extractor import StatementExtractor

extractor = StatementExtractor(device="cuda")  # or "mps" (Apple Silicon) or "cpu"

texts = ["Text 1...", "Text 2...", "Text 3..."]
for text in texts:
    result = extractor.extract(text)
    print(f"Found {len(result)} statements")

Entity Types

Type         Description            Example
ORG          Organizations          Apple Inc., United Nations
PERSON       People                 Tim Cook, Elon Musk
GPE          Geopolitical entities  USA, California, Paris
LOC          Non-GPE locations      Mount Everest, Pacific Ocean
PRODUCT      Products               iPhone, Model S
EVENT        Events                 World Cup, CES 2024
WORK_OF_ART  Creative works         Mona Lisa, Game of Thrones
LAW          Legal documents        GDPR, Clean Air Act
DATE         Dates                  2024, January 15
MONEY        Monetary values        $50 million, €100
PERCENT      Percentages            25%, 0.5%
QUANTITY     Quantities             500 employees, 1.5 tons
UNKNOWN      Unrecognized (fallback)

How It Works

This library uses the T5-Gemma 2 statement extraction model with Diverse Beam Search (Vijayakumar et al., 2016):

  1. Diverse Beam Search: Generates 4+ candidate outputs using beam groups with diversity penalty
  2. Quality Scoring: Each triple scored for groundedness in source text
  3. Beam Merging: Top beams combined for better coverage
  4. Embedding Dedup: Semantic similarity removes near-duplicate predicates
  5. Predicate Normalization: Optional taxonomy matching via embeddings
  6. Contextualized Matching: Full statement context used for canonicalization and dedup
  7. Entity Type Merging: UNKNOWN types merged with specific types during dedup
  8. Reversal Detection: Subject-object reversals detected and corrected via embedding comparison
  9. GLiNER2 Extraction (v0.4.0): Entity recognition and relation extraction for improved accuracy
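The diversity idea in step 1 can be illustrated with a simplified greedy selection over finished candidates; real Diverse Beam Search applies the penalty per decoding step across beam groups, so this sketch only mimics the effect at the whole-candidate level:

```python
def select_diverse(candidates, k=2, penalty=0.5):
    """Greedily pick k candidates, scoring each as its own score minus a
    penalty for token overlap with already-selected outputs (a simplified
    stand-in for Diverse Beam Search's inter-group diversity term)."""
    selected = []
    remaining = list(candidates)
    while remaining and len(selected) < k:
        def adjusted(c):
            text, score = c
            overlap = sum(len(set(text.split()) & set(s.split()))
                          for s, _ in selected)
            return score - penalty * overlap
        best = max(remaining, key=adjusted)
        selected.append(best)
        remaining.remove(best)
    return [text for text, _ in selected]

beams = [
    ("Apple announced the iPhone 15", 0.95),
    ("Apple announced the iPhone 15 event", 0.93),  # near-duplicate, penalized
    ("Tim Cook presented new features", 0.80),
]
print(select_diverse(beams, k=2))  # picks two dissimilar beams
```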

Requirements

  • Python 3.10+
  • PyTorch 2.0+
  • Transformers 5.0+
  • Pydantic 2.0+
  • sentence-transformers 2.2+
  • GLiNER2 (model downloaded automatically on first use)
  • ~2GB VRAM (GPU) or ~4GB RAM (CPU)

License

MIT License - see LICENSE file for details.
