# Corp Extractor

Extract structured subject-predicate-object statements from unstructured text using the T5-Gemma 2 model with Diverse Beam Search.

## Features
- Structured Extraction: Converts unstructured text into subject-predicate-object triples
- Entity Type Recognition: Identifies 12 entity types (ORG, PERSON, GPE, LOC, PRODUCT, EVENT, etc.) plus an UNKNOWN fallback
- Quality Scoring (v0.2.0): Each triple scored for groundedness (0-1) based on source text
- Beam Merging (v0.2.0): Combines top beams for better coverage instead of picking one
- Embedding-based Dedup (v0.2.0): Uses semantic similarity to detect near-duplicate predicates
- Predicate Taxonomies (v0.2.0): Normalize predicates to canonical forms via embeddings
- Multiple Output Formats: Get results as Pydantic models, JSON, XML, or dictionaries
## Installation

```shell
# Recommended: include embedding support for smart deduplication
pip install corp-extractor[embeddings]

# Minimal installation (no embedding features)
pip install corp-extractor
```

Note: for GPU support, install PyTorch with CUDA first:

```shell
pip install torch --index-url https://download.pytorch.org/whl/cu121
pip install corp-extractor[embeddings]
```
## Quick Start

```python
from statement_extractor import extract_statements

result = extract_statements("""
Apple Inc. announced the iPhone 15 at their September event.
Tim Cook presented the new features to customers worldwide.
""")

for stmt in result:
    print(f"{stmt.subject.text} ({stmt.subject.type})")
    print(f"  --[{stmt.predicate}]--> {stmt.object.text}")
    print(f"  Confidence: {stmt.confidence_score:.2f}")  # New in v0.2.0
```
## New in v0.2.0: Quality Scoring & Beam Merging

By default, the library now:

- Scores each triple for groundedness based on whether its entities appear in the source text
- Merges the top beams instead of selecting a single one, improving coverage
- Uses embeddings to detect semantically similar predicates ("bought" ≈ "acquired")

```python
from statement_extractor import ExtractionOptions, ScoringConfig

# Precision mode: filter out low-confidence triples
scoring = ScoringConfig(min_confidence=0.7)
options = ExtractionOptions(scoring_config=scoring)
result = extract_statements(text, options)

# Access confidence scores
for stmt in result:
    print(f"{stmt} (confidence: {stmt.confidence_score:.2f})")
```
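Conceptually, groundedness scoring checks how much of a triple can actually be found in the source text, and `min_confidence` then acts as a cutoff. A minimal sketch of that idea in plain Python (the real scorer is more sophisticated; `score_triple` is a hypothetical helper, not part of the library's API):

```python
def score_triple(triple: dict, source: str) -> float:
    """Toy groundedness score: fraction of triple entities found verbatim in the source."""
    parts = (triple["subject"], triple["object"])
    found = sum(1 for p in parts if p.lower() in source.lower())
    return found / len(parts)

source = "Apple Inc. announced the iPhone 15 at their September event."
triples = [
    {"subject": "Apple Inc.", "predicate": "announced", "object": "iPhone 15"},
    {"subject": "Apple Inc.", "predicate": "hired", "object": "Tim Cook"},  # not grounded here
]

# Keep only triples meeting a minimum confidence, analogous to min_confidence=0.7
kept = [t for t in triples if score_triple(t, source) >= 0.7]
```

In this sketch the first triple scores 1.0 (both entities appear verbatim) and the second scores 0.5, so only the first survives the filter.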
## New in v0.2.0: Predicate Taxonomies

Normalize predicates to canonical forms using embedding similarity:

```python
from statement_extractor import PredicateTaxonomy, ExtractionOptions

taxonomy = PredicateTaxonomy(predicates=[
    "acquired", "founded", "works_for", "announced",
    "invested_in", "partnered_with",
])
options = ExtractionOptions(predicate_taxonomy=taxonomy)
result = extract_statements(text, options)

# "bought" -> "acquired" via embedding similarity
for stmt in result:
    if stmt.canonical_predicate:
        print(f"{stmt.predicate} -> {stmt.canonical_predicate}")
```
## Disable Embeddings (Faster, No Extra Dependencies)

```python
options = ExtractionOptions(
    embedding_dedup=False,  # use exact text matching
    merge_beams=False,      # select the single best beam
)
result = extract_statements(text, options)
```
## Output Formats

```python
from statement_extractor import (
    extract_statements,
    extract_statements_as_json,
    extract_statements_as_xml,
    extract_statements_as_dict,
)

# Pydantic models (default)
result = extract_statements(text)

# JSON string
json_output = extract_statements_as_json(text)

# Raw XML (the model's native output format)
xml_output = extract_statements_as_xml(text)

# Python dictionary
dict_output = extract_statements_as_dict(text)
```
## Batch Processing

```python
from statement_extractor import StatementExtractor

extractor = StatementExtractor(device="cuda")  # or "cpu"

texts = ["Text 1...", "Text 2...", "Text 3..."]
for text in texts:
    result = extractor.extract(text)
    print(f"Found {len(result)} statements")
```
## Entity Types

| Type | Description | Example |
|---|---|---|
| ORG | Organizations | Apple Inc., United Nations |
| PERSON | People | Tim Cook, Elon Musk |
| GPE | Geopolitical entities | USA, California, Paris |
| LOC | Non-GPE locations | Mount Everest, Pacific Ocean |
| PRODUCT | Products | iPhone, Model S |
| EVENT | Events | World Cup, CES 2024 |
| WORK_OF_ART | Creative works | Mona Lisa, Game of Thrones |
| LAW | Legal documents | GDPR, Clean Air Act |
| DATE | Dates | 2024, January 15 |
| MONEY | Monetary values | $50 million, €100 |
| PERCENT | Percentages | 25%, 0.5% |
| QUANTITY | Quantities | 500 employees, 1.5 tons |
| UNKNOWN | Unrecognized (fallback) | |
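Since every subject and object carries one of these types (see the Quick Start's `stmt.subject.type`), a common pattern is filtering statements by entity type. A sketch over plain dicts standing in for the library's Pydantic models (field names follow the Quick Start example):

```python
statements = [
    {"subject": {"text": "Apple Inc.", "type": "ORG"},
     "predicate": "announced",
     "object": {"text": "iPhone 15", "type": "PRODUCT"}},
    {"subject": {"text": "Tim Cook", "type": "PERSON"},
     "predicate": "presented",
     "object": {"text": "new features", "type": "UNKNOWN"}},
]

# Keep only statements whose subject is an organization
org_statements = [s for s in statements if s["subject"]["type"] == "ORG"]
```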
## How It Works

This library uses the T5-Gemma 2 statement extraction model with Diverse Beam Search (Vijayakumar et al., 2016):

- Diverse Beam Search: Generates 4+ candidate outputs using beam groups with a diversity penalty
- Quality Scoring (v0.2.0): Each triple is scored for groundedness in the source text
- Beam Merging (v0.2.0): The top beams are combined for better coverage
- Embedding Dedup (v0.2.0): Semantic similarity removes near-duplicate predicates
- Predicate Normalization (v0.2.0): Optional taxonomy matching via embeddings
## Requirements

- Python 3.10+
- PyTorch 2.0+
- Transformers 4.35+
- Pydantic 2.0+
- sentence-transformers 2.2+ (optional, for embedding features)
- ~2 GB VRAM (GPU) or ~4 GB RAM (CPU)
## License

MIT License - see the LICENSE file for details.