Extract structured statements from text using T5-Gemma 2 and Diverse Beam Search
Corp Extractor
Extract structured subject-predicate-object statements from unstructured text using the T5-Gemma 2 model.
Features
- Structured Extraction: Converts unstructured text into subject-predicate-object triples
- Entity Type Recognition: Identifies 12 entity types (ORG, PERSON, GPE, LOC, PRODUCT, EVENT, etc.)
- Combined Quality Scoring (v0.3.0): Confidence combines semantic similarity (50%) + subject/object noun scores (25% each)
- spaCy-First Predicates (v0.3.0): Always uses spaCy for predicate extraction (model predicates are unreliable)
- Multi-Candidate Extraction (v0.3.0): Generates 3 candidates per statement (hybrid, spaCy-only, predicate-split)
- Best Triple Selection (v0.3.0): Keeps only the highest-scoring triple per source (use `--all-triples` to keep all)
- Extraction Method Tracking (v0.3.0): Each statement includes an `extraction_method` field (hybrid, spacy, split, model)
- Beam Merging (v0.2.0): Combines top beams for better coverage instead of picking one
- Embedding-based Dedup (v0.2.0): Uses semantic similarity to detect near-duplicate predicates
- Predicate Taxonomies (v0.2.0): Normalize predicates to canonical forms via embeddings
- Contextualized Matching (v0.2.2): Compares full "Subject Predicate Object" against source text for better accuracy
- Entity Type Merging (v0.2.3): Automatically merges UNKNOWN entity types with specific types during deduplication
- Reversal Detection (v0.2.3): Detects and corrects subject-object reversals using embedding comparison
- Command Line Interface (v0.2.4): Full-featured CLI for terminal usage
- Multiple Output Formats: Get results as Pydantic models, JSON, XML, or dictionaries
Installation
```bash
pip install corp-extractor
```
The spaCy model for predicate inference is downloaded automatically on first use.
Note: This package requires transformers>=5.0.0 for T5-Gemma2 model support.
For GPU support, install PyTorch with CUDA first:
```bash
pip install torch --index-url https://download.pytorch.org/whl/cu121
pip install corp-extractor
```
For Apple Silicon (M1/M2/M3), MPS acceleration is automatically detected:
```bash
pip install corp-extractor  # MPS used automatically
```
Quick Start
```python
from statement_extractor import extract_statements

result = extract_statements("""
Apple Inc. announced the iPhone 15 at their September event.
Tim Cook presented the new features to customers worldwide.
""")

for stmt in result:
    print(f"{stmt.subject.text} ({stmt.subject.type})")
    print(f"  --[{stmt.predicate}]--> {stmt.object.text}")
    print(f"  Confidence: {stmt.confidence_score:.2f}")  # NEW in v0.2.0
```
Command Line Interface
The library includes a CLI for quick extraction from the terminal.
Install Globally (Recommended)
For best results, install globally first:
```bash
# Using uv (recommended)
uv tool install "corp-extractor[embeddings]"

# Using pipx
pipx install "corp-extractor[embeddings]"

# Using pip
pip install "corp-extractor[embeddings]"

# Then use anywhere
corp-extractor "Your text here"
```
Quick Run with uvx
Run directly without installing using uv:
```bash
uvx corp-extractor "Apple announced a new iPhone."
```
Note: First run downloads the model (~1.5GB) which may take a few minutes.
Usage Examples
```bash
# Extract from text argument
corp-extractor "Apple Inc. announced the iPhone 15 at their September event."

# Extract from file
corp-extractor -f article.txt

# Pipe from stdin
cat article.txt | corp-extractor -

# Output as JSON
corp-extractor "Tim Cook is CEO of Apple." --json

# Output as XML
corp-extractor -f article.txt --xml

# Verbose output with confidence scores
corp-extractor -f article.txt --verbose

# Use more beams for better quality
corp-extractor -f article.txt --beams 8

# Use custom predicate taxonomy
corp-extractor -f article.txt --taxonomy predicates.txt

# Use GPU explicitly
corp-extractor -f article.txt --device cuda
```
CLI Options
```text
Usage: corp-extractor [OPTIONS] [TEXT]

Options:
  -f, --file PATH                Read input from file
  -o, --output [table|json|xml]  Output format (default: table)
  --json                         Output as JSON (shortcut)
  --xml                          Output as XML (shortcut)
  -b, --beams INTEGER            Number of beams (default: 4)
  --diversity FLOAT              Diversity penalty (default: 1.0)
  --max-tokens INTEGER           Max tokens to generate (default: 2048)
  --no-dedup                     Disable deduplication
  --no-embeddings                Disable embedding-based dedup (faster)
  --no-merge                     Disable beam merging
  --no-spacy                     Disable spaCy extraction (use raw model output)
  --all-triples                  Keep all candidate triples (default: best per source)
  --dedup-threshold FLOAT        Deduplication threshold (default: 0.65)
  --min-confidence FLOAT         Min confidence filter (default: 0)
  --taxonomy PATH                Load predicate taxonomy from file
  --taxonomy-threshold FLOAT     Taxonomy matching threshold (default: 0.5)
  --device [auto|cuda|mps|cpu]   Device to use (default: auto)
  -v, --verbose                  Show confidence scores and metadata
  -q, --quiet                    Suppress progress messages
  --version                      Show version
  --help                         Show this message
```
New in v0.2.0: Quality Scoring & Beam Merging
By default, the library now:
- Scores each triple for groundedness based on whether entities appear in source text
- Merges top beams instead of selecting one, improving coverage
- Uses embeddings to detect semantically similar predicates ("bought" ≈ "acquired")
```python
from statement_extractor import ExtractionOptions, ScoringConfig, extract_statements

# Precision mode - filter low-confidence triples
scoring = ScoringConfig(min_confidence=0.7)
options = ExtractionOptions(scoring_config=scoring)
result = extract_statements(text, options)

# Access confidence scores
for stmt in result:
    print(f"{stmt} (confidence: {stmt.confidence_score:.2f})")
```
New in v0.2.0: Predicate Taxonomies
Normalize predicates to canonical forms using embedding similarity:
```python
from statement_extractor import ExtractionOptions, PredicateTaxonomy, extract_statements

taxonomy = PredicateTaxonomy(predicates=[
    "acquired", "founded", "works_for", "announced",
    "invested_in", "partnered_with",
])

options = ExtractionOptions(predicate_taxonomy=taxonomy)
result = extract_statements(text, options)

# "bought" -> "acquired" via embedding similarity
for stmt in result:
    if stmt.canonical_predicate:
        print(f"{stmt.predicate} -> {stmt.canonical_predicate}")
```
New in v0.2.2: Contextualized Matching
Predicate canonicalization and deduplication now use contextualized matching:
- Compares full "Subject Predicate Object" strings against source text
- Better accuracy because predicates are evaluated in context
- When duplicates are found, keeps the statement with the best match to source text
This means "Apple bought Beats" vs "Apple acquired Beats" are compared holistically, not just "bought" vs "acquired".
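The idea can be sketched with a toy similarity function; a bag-of-words cosine stands in for the real sentence-embedding model, and `similarity` / `pick_best_duplicate` are illustrative names, not library API:

```python
from collections import Counter
from math import sqrt

def similarity(a: str, b: str) -> float:
    """Bag-of-words cosine (toy stand-in for embedding similarity)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = sqrt(sum(c * c for c in va.values())) * sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

def pick_best_duplicate(source: str, candidates: list[tuple[str, str, str]]) -> tuple[str, str, str]:
    """Among duplicate triples, keep the one whose full 'Subject Predicate
    Object' string best matches the source text."""
    return max(candidates, key=lambda t: similarity(source, " ".join(t)))

source = "Apple bought Beats for $3 billion."
dupes = [("Apple", "bought", "Beats"), ("Apple", "acquired", "Beats")]
best = pick_best_duplicate(source, dupes)  # ("Apple", "bought", "Beats")
```

Because the whole triple is compared, the duplicate whose wording matches the source ("bought") survives deduplication.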
New in v0.2.3: Entity Type Merging & Reversal Detection
Entity Type Merging
When deduplicating statements, entity types are now automatically merged. If one statement has UNKNOWN type and a duplicate has a specific type (like ORG or PERSON), the specific type is preserved:
```text
# Before deduplication:
#   Statement 1: AtlasBio Labs (UNKNOWN) --sued by--> CuraPharm (ORG)
#   Statement 2: AtlasBio Labs (ORG)     --sued by--> CuraPharm (ORG)

# After deduplication:
#   Single statement: AtlasBio Labs (ORG) --sued by--> CuraPharm (ORG)
```
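The merge rule itself is simple enough to sketch (illustrative only, not the library's actual code):

```python
def merge_entity_type(kept: str, duplicate: str) -> str:
    """Sketch of the v0.2.3 rule: when merging duplicate statements, a
    specific entity type wins over the UNKNOWN fallback."""
    if kept == "UNKNOWN" and duplicate != "UNKNOWN":
        return duplicate
    return kept

merge_entity_type("UNKNOWN", "ORG")  # -> "ORG"
merge_entity_type("ORG", "UNKNOWN")  # -> "ORG"
```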
Subject-Object Reversal Detection
The library now detects when subject and object may have been extracted in the wrong order by comparing embeddings against source text:
```python
from statement_extractor import PredicateComparer

comparer = PredicateComparer()

# Automatically detect and fix reversals
fixed_statements = comparer.detect_and_fix_reversals(statements)

for stmt in fixed_statements:
    if stmt.was_reversed:
        print(f"Fixed reversal: {stmt}")
```
How it works:
- For each statement with source text, compares:
  - "Subject Predicate Object" embedding vs source text
  - "Object Predicate Subject" embedding vs source text
- If the reversed form has higher similarity, swaps subject and object
- Sets `was_reversed=True` to indicate the correction
During deduplication, reversed duplicates (e.g., "A -> P -> B" and "B -> P -> A") are now detected and merged, with the correct orientation determined by source text similarity.
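The steps above can be sketched with an order-sensitive toy similarity; a word-bigram cosine stands in for the embedding model (a plain bag of words would score both orderings identically), and `fix_reversal` is an illustrative name, not the library's API:

```python
from collections import Counter
from math import sqrt

def bigrams(text: str) -> Counter:
    """Word-bigram counts; bigrams are order-sensitive, unlike a bag of words."""
    toks = text.lower().replace(".", "").split()
    return Counter(zip(toks, toks[1:]))

def similarity(a: str, b: str) -> float:
    """Cosine similarity over word bigrams (toy stand-in for sentence embeddings)."""
    va, vb = bigrams(a), bigrams(b)
    dot = sum(va[g] * vb[g] for g in va)
    norm = sqrt(sum(c * c for c in va.values())) * sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

def fix_reversal(source: str, subj: str, pred: str, obj: str) -> tuple[str, str, bool]:
    """Swap subject and object if the reversed reading better matches the source."""
    forward = similarity(source, f"{subj} {pred} {obj}")
    backward = similarity(source, f"{obj} {pred} {subj}")
    if backward > forward:
        return obj, subj, True   # corrected; was_reversed=True
    return subj, obj, False

fix_reversal("CuraPharm sued AtlasBio Labs.", "AtlasBio Labs", "sued", "CuraPharm")
# -> ("CuraPharm", "AtlasBio Labs", True)
```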
New in v0.3.0: spaCy-First Extraction & Semantic Scoring
v0.3.0 introduces significant improvements to extraction quality:
spaCy-First Predicate Extraction
The T5-Gemma model is excellent at:
- Triple isolation - identifying that a relationship exists
- Coreference resolution - resolving pronouns to named entities
But unreliable at:
- Predicate extraction - often returns empty or wrong predicates
Solution: v0.3.0 always uses spaCy for predicate extraction. The model provides subject, object, entity types, and source text; spaCy provides the predicate.
Three Candidate Extraction Methods
For each statement, three candidates are generated and the best is selected:
| Method | Description |
|---|---|
| `hybrid` | Model subject/object + spaCy predicate |
| `spacy` | All components from spaCy dependency parsing |
| `split` | Source text split around the predicate |
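The `split` method can be sketched as follows (illustrative only; the library's actual splitting logic may differ):

```python
def split_candidate(source: str, predicate: str) -> tuple[str, str]:
    """Sketch of the 'split' method: take the text on either side of the
    predicate as subject and object."""
    before, _, after = source.partition(predicate)
    return before.strip(" .,"), after.strip(" .,")

split_candidate("Apple Inc. announced the iPhone 15", "announced")
# -> ("Apple Inc", "the iPhone 15")
```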
```python
for stmt in result:
    print(f"{stmt.subject.text} --[{stmt.predicate}]--> {stmt.object.text}")
    print(f"  Method: {stmt.extraction_method}")  # hybrid, spacy, split, or model
    print(f"  Confidence: {stmt.confidence_score:.2f}")
```
Combined Quality Scoring
Confidence scores combine semantic similarity and grammatical accuracy:
| Component | Weight | Description |
|---|---|---|
| Semantic similarity | 50% | Cosine similarity between source text and reassembled triple |
| Subject noun score | 25% | How noun-like the subject is |
| Object noun score | 25% | How noun-like the object is |
Noun scoring:
- Proper noun(s) only: 1.0
- Common noun(s) only: 0.8
- Contains noun + other words: 0.4-0.8 (based on ratio)
- No nouns: 0.2
This ensures extracted subjects and objects are grammatically valid entities, not fragments or verb phrases.
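A minimal sketch of this scoring, assuming the 0.4-0.8 band interpolates linearly in the noun ratio (the 50/25/25 weights are as documented above; `noun_score` and `confidence` are illustrative names, not library API):

```python
def noun_score(pos_tags: list[str]) -> float:
    """Noun-likeness of a phrase from its POS tags (spaCy-style coarse tags)."""
    if not pos_tags:
        return 0.2
    if all(t == "PROPN" for t in pos_tags):
        return 1.0  # proper noun(s) only
    if all(t in ("NOUN", "PROPN") for t in pos_tags):
        return 0.8  # noun(s) only
    ratio = sum(t in ("NOUN", "PROPN") for t in pos_tags) / len(pos_tags)
    if ratio > 0:
        return 0.4 + 0.4 * ratio  # contains noun + other words: 0.4-0.8 by ratio
    return 0.2  # no nouns

def confidence(semantic_similarity: float, subj_tags: list[str], obj_tags: list[str]) -> float:
    """50% semantic similarity + 25% subject noun score + 25% object noun score."""
    return (0.5 * semantic_similarity
            + 0.25 * noun_score(subj_tags)
            + 0.25 * noun_score(obj_tags))

confidence(0.9, ["PROPN", "PROPN"], ["PROPN"])  # 0.5*0.9 + 0.25 + 0.25 = 0.95
```

A verb phrase mistakenly extracted as an object thus caps the whole triple's confidence, regardless of how well it matches the source semantically.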
Extraction Method Tracking
Each statement now includes an extraction_method field:
- `hybrid` - Model subject/object + spaCy predicate
- `spacy` - All components from spaCy dependency parsing
- `split` - Subject/object from splitting source text around predicate
- `model` - All components from T5-Gemma model (only when `--no-spacy`)
Best Triple Selection
By default, only the highest-scoring triple is kept for each source sentence. This ensures clean output without redundant candidates.
To keep all candidate triples (for debugging or analysis):
```python
options = ExtractionOptions(all_triples=True)
result = extract_statements(text, options)
```
Or via CLI:
```bash
corp-extractor "Your text" --all-triples --verbose
```
Disable spaCy extraction to use only model output:
```python
options = ExtractionOptions(use_spacy_extraction=False)
result = extract_statements(text, options)
```
Or via CLI:
```bash
corp-extractor "Your text" --no-spacy
```
Disable Embeddings
```python
options = ExtractionOptions(
    embedding_dedup=False,  # Use exact text matching
    merge_beams=False,      # Select single best beam
)
result = extract_statements(text, options)
```
Output Formats
```python
from statement_extractor import (
    extract_statements,
    extract_statements_as_json,
    extract_statements_as_xml,
    extract_statements_as_dict,
)

# Pydantic models (default)
result = extract_statements(text)

# JSON string
json_output = extract_statements_as_json(text)

# Raw XML (model's native format)
xml_output = extract_statements_as_xml(text)

# Python dictionary
dict_output = extract_statements_as_dict(text)
```
Batch Processing
```python
from statement_extractor import StatementExtractor

extractor = StatementExtractor(device="cuda")  # or "mps" (Apple Silicon) or "cpu"

texts = ["Text 1...", "Text 2...", "Text 3..."]
for text in texts:
    result = extractor.extract(text)
    print(f"Found {len(result)} statements")
```
Entity Types
| Type | Description | Example |
|---|---|---|
| `ORG` | Organizations | Apple Inc., United Nations |
| `PERSON` | People | Tim Cook, Elon Musk |
| `GPE` | Geopolitical entities | USA, California, Paris |
| `LOC` | Non-GPE locations | Mount Everest, Pacific Ocean |
| `PRODUCT` | Products | iPhone, Model S |
| `EVENT` | Events | World Cup, CES 2024 |
| `WORK_OF_ART` | Creative works | Mona Lisa, Game of Thrones |
| `LAW` | Legal documents | GDPR, Clean Air Act |
| `DATE` | Dates | 2024, January 15 |
| `MONEY` | Monetary values | $50 million, €100 |
| `PERCENT` | Percentages | 25%, 0.5% |
| `QUANTITY` | Quantities | 500 employees, 1.5 tons |
| `UNKNOWN` | Unrecognized | (fallback) |
How It Works
This library uses the T5-Gemma 2 statement extraction model with Diverse Beam Search (Vijayakumar et al., 2016):
- Diverse Beam Search: Generates 4+ candidate outputs using beam groups with diversity penalty
- Quality Scoring (v0.2.0): Each triple scored for groundedness in source text
- Beam Merging (v0.2.0): Top beams combined for better coverage
- Embedding Dedup (v0.2.0): Semantic similarity removes near-duplicate predicates
- Predicate Normalization (v0.2.0): Optional taxonomy matching via embeddings
- Contextualized Matching (v0.2.2): Full statement context used for canonicalization and dedup
- Entity Type Merging (v0.2.3): UNKNOWN types merged with specific types during dedup
- Reversal Detection (v0.2.3): Subject-object reversals detected and corrected via embedding comparison
- Hybrid spaCy (v0.2.12): spaCy candidates added to pool alongside model output for better coverage
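As a sketch of the first step, the CLI's `--beams`, `--diversity`, and `--max-tokens` flags plausibly map onto Hugging Face `generate()` parameters as below. The kwarg names are real transformers generation parameters; the exact mapping used by this library is an assumption:

```python
def generation_kwargs(beams: int = 4, diversity: float = 1.0, max_tokens: int = 2048) -> dict:
    """Build Diverse Beam Search settings for transformers' generate()."""
    return {
        "num_beams": beams,
        "num_beam_groups": beams,        # one beam per group for maximum diversity
        "diversity_penalty": diversity,  # penalizes tokens repeated across groups
        "num_return_sequences": beams,   # return every beam so top beams can be merged
        "max_new_tokens": max_tokens,
        "do_sample": False,              # diverse beam search is deterministic
    }

kwargs = generation_kwargs(beams=8)
# model.generate(**inputs, **kwargs) would then yield 8 diverse candidate outputs
```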
Requirements
- Python 3.10+
- PyTorch 2.0+
- Transformers 5.0+
- Pydantic 2.0+
- sentence-transformers 2.2+
- spaCy 3.5+ (model downloaded automatically on first use)
- ~2GB VRAM (GPU) or ~4GB RAM (CPU)
License
MIT License - see LICENSE file for details.