# WordLift Python SDK
A Python toolkit for orchestrating WordLift imports: fetch URLs from sitemaps, Google Sheets, or explicit lists; filter out already imported pages; enqueue Search Console jobs; push RDF graphs; and call the WordLift APIs to import web pages.
## Features
- URL sources: XML sitemaps (with optional regex filtering), Google Sheets (`url` column), or Python lists.
- Change detection: skips URLs that are already imported unless `OVERWRITE` is enabled; re-imports when `lastmod` is newer.
- Web page imports: sends URLs to WordLift with embedding requests, output types, retry logic, and pluggable callbacks.
- Python 3.14 compatibility: retry filters use `pydantic_core.ValidationError` via the public API.
- Search Console refresh: triggers analytics imports when top queries are stale.
- Graph templates: renders `.ttl.liquid` templates under `data/templates` with account data and uploads the resulting RDF graphs.
- Extensible: override protocols via `WORDLIFT_OVERRIDE_DIR` without changing the library code.
## Installation

```shell
pip install wordlift-sdk
# or
poetry add wordlift-sdk
```
Requires Python 3.10–3.14.
## Configuration

Settings are read in order: `config/default.py` (or a custom path you pass to `ConfigurationProvider.create`), environment variables, then (when available) Google Colab userdata.
Common options:
- `WORDLIFT_KEY` (required): WordLift API key.
- `API_URL`: WordLift API base URL, defaults to `https://api.wordlift.io`.
- `SITEMAP_URL`: XML sitemap to crawl; `SITEMAP_URL_PATTERN`: optional regex to filter URLs.
- `SHEETS_URL`, `SHEETS_NAME`, `SHEETS_SERVICE_ACCOUNT`: use a Google Sheet as the source; the service account setting points to a credentials file.
- `URLS`: list of URLs (e.g., `["https://example.com/a", "https://example.com/b"]`).
- `OVERWRITE`: re-import URLs even if already present (default `False`).
- `WEB_PAGE_IMPORT_WRITE_STRATEGY`: WordLift write strategy (default `createOrUpdateModel`).
- `EMBEDDING_PROPERTIES`: list of schema properties to embed.
- `WEB_PAGE_TYPES`: output schema types, defaults to `["http://schema.org/Article"]`.
- `GOOGLE_SEARCH_CONSOLE`: enable/disable the Search Console handler (default `True`).
- `CONCURRENCY`: max concurrent handlers, defaults to `min(cpu_count(), 4)`.
- `WORDLIFT_OVERRIDE_DIR`: folder containing protocol overrides (default `app/overrides`).
## TLS/SSL

The SDK enforces SSL verification. On macOS it uses the system CA bundle when available and falls back to `certifi` if needed. You can override the CA bundle path explicitly in code:
```python
from pathlib import Path

from wordlift_sdk.client import ClientConfigurationFactory
from wordlift_sdk.structured_data import CreateRequest

factory = ClientConfigurationFactory(
    key="your-api-key",
    api_url="https://api.wordlift.io",
    ssl_ca_cert="/path/to/ca.pem",
)
configuration = factory.create()

request = CreateRequest(
    url="https://example.com",
    target_type="Thing",
    output_dir=Path("."),
    base_name="structured-data",
    jsonld_path=None,
    yarrml_path=None,
    api_key="your-api-key",
    base_url=None,
    ssl_ca_cert="/path/to/ca.pem",
    debug=False,
    headed=False,
    timeout_ms=30000,
    max_retries=2,
    quality_check=True,
    max_xhtml_chars=40000,
    max_text_node_chars=400,
    max_nesting_depth=2,
    verbose=True,
    validate=True,
    wait_until="networkidle",
)
```
Note: `target_type` is used for agent guidance and validation shape selection. The YARRRML materialization pipeline now preserves authored mapping semantics and does not coerce nodes to `Review`/`Thing`.
Example `config/default.py`:

```python
WORDLIFT_KEY = "your-api-key"
SITEMAP_URL = "https://example.com/sitemap.xml"
SITEMAP_URL_PATTERN = r"^https://example.com/article/.*$"
GOOGLE_SEARCH_CONSOLE = True
WEB_PAGE_TYPES = ["http://schema.org/Article"]
EMBEDDING_PROPERTIES = [
    "http://schema.org/headline",
    "http://schema.org/abstract",
    "http://schema.org/text",
]
```
## Running the import workflow

```python
import asyncio

from wordlift_sdk import run_kg_import_workflow

if __name__ == "__main__":
    asyncio.run(run_kg_import_workflow())
```
The workflow:
- Renders and uploads RDF graphs from `data/templates/*.ttl.liquid` using account info.
- Builds the configured URL source and filters out unchanged URLs (unless `OVERWRITE`).
- Sends each URL to WordLift for import with retries and optional Search Console refresh.
You can build components yourself when you need more control:
```python
import asyncio

from wordlift_sdk.container.application_container import ApplicationContainer

async def main():
    container = ApplicationContainer()
    workflow = await container.create_kg_import_workflow()
    await workflow.run()

asyncio.run(main())
```
## Custom callbacks and overrides

Override the web page import callback by placing `web_page_import_protocol.py` with a `WebPageImportProtocol` class under `WORDLIFT_OVERRIDE_DIR` (default `app/overrides`). The callback receives a `WebPageImportResponse` and can push to `graph_queue` or `entity_patch_queue`.
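A minimal override sketch follows. The file location and class name come from the text above, but the callback method name and queue interfaces shown here are assumptions; check the shipped protocol for the exact signature:

```python
# app/overrides/web_page_import_protocol.py (hypothetical sketch)


class WebPageImportProtocol:
    async def handle(self, response, graph_queue, entity_patch_queue):
        # `response` is a WebPageImportResponse; derived work can be pushed
        # onto graph_queue or entity_patch_queue for downstream processing.
        entity_id = getattr(response, "id", None)
        if entity_id:
            await entity_patch_queue.put({"id": entity_id})
        return response
```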
## Templates

Add `.ttl.liquid` files under `data/templates`. Templates render with account fields available (e.g., `{{ account.dataset_uri }}`) and are uploaded before URL handling begins.
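A minimal template sketch (the file name and triple content are illustrative; only `account.dataset_uri` comes from the rendering context described above):

```liquid
# data/templates/publisher.ttl.liquid (illustrative)
@prefix schema: <http://schema.org/> .

<{{ account.dataset_uri }}/publisher>
    a schema:Organization ;
    schema:name "Example Publisher" .
```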
## Validation

SHACL validation utilities and generated Google Search Gallery shapes are included. When a feature includes both container types (for example `ItemList`, `BreadcrumbList`, `QAPage`, `FAQPage`, `Quiz`, `ProfilePage`, `Product`, `Recipe`, `Course`, `Review`) and their contained types (`ListItem`, `Question`, `Answer`, `Comment`, `Offer`, `AggregateOffer`, `HowToStep`, `Person`, `Organization`, `Rating`, `AggregateRating`, `Review`, `ItemList`), the generator scopes the contained constraints under the container properties to avoid enforcing them on unrelated nodes. For Product snippets, `offers` is scoped as `Offer` or `AggregateOffer`, matching Google requirements. The generator also captures "one of" requirements expressed in prose lists and emits `sh:or` constraints so any listed property satisfies the requirement. Schema.org grammar checks are intentionally permissive and accept URL/text literals for all properties.
Use `wordlift_sdk.validation.validate_jsonld_from_url` to render a URL with Playwright, extract JSON-LD fragments, and validate them against SHACL shapes.
Playwright is required for URL rendering. After installing dependencies, install the browser binaries:
```shell
poetry run playwright install
```
## Structured Data Tokens

YARRRML mappings are now executed directly by morph-kgc's native YARRRML support. There is no JS transpile step via `yarrrml-parser`, and no temporary `mapping.ttl` conversion artifact in the materialization pipeline.
Customer-authored mappings can use runtime tokens:
- `__XHTML__`: the local XHTML source path used by materialization.
- `__URL__`: canonical page URL injection.
- `__ID__`: callback/import entity IRI injection.
`__URL__` resolution order is:

1. `response.web_page.url`
2. the explicit `url` argument passed to materialization
`__ID__` resolution sources are:

- `response.id` (legacy import callbacks)
- `existing_web_page_id` injected by `kg_build` scrape callbacks
When unresolved:

- `__URL__`: in strict mode (`strict_url_token=True`), fail fast; in the default non-strict mode, warn and keep `__URL__` unchanged.
- `__ID__`: fail closed with an explicit error.
Recommendation: use `__ID__` in subject/object IRI positions instead of temporary hardcoded page subjects such as `{{ dataset_uri }}/web-pages/page`.
Compatibility note: morph-kgc's native YARRRML behavior may differ from the legacy JS parser behavior for some advanced XPath/function constructs.
When preparing XHTML sources from raw HTML, `HtmlConverter` strips undeclared namespace prefixes from tag names and removes undeclared prefixed attributes to avoid `xml.etree.ElementTree.ParseError: unbound prefix` failures in XPath materialization flows. It also removes XML-invalid comments and processing instructions, validates output with `xml.etree.ElementTree.fromstring()`, and runs a strict fallback sanitation pass before surfacing a context-rich conversion error.
Converted XHTML also strips default `xmlns` declarations so unprefixed XPath selectors (for example `.//div`, `.//h1`) work with `__XHTML__` sources.
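The failure mode being avoided can be reproduced with the standard library alone: parsing markup that uses an undeclared namespace prefix raises the `unbound prefix` error, while the same markup with the prefixed element removed parses cleanly (a minimal illustration, not the SDK's converter):

```python
import xml.etree.ElementTree as ET

# An HTML-ism like <fb:like> never declares the "fb" prefix, so strict
# XML parsing rejects it with "unbound prefix".
bad = '<div><fb:like href="https://example.com"/></div>'
try:
    ET.fromstring(bad)
    parsed_bad = True
except ET.ParseError:
    parsed_bad = False

# With the undeclared prefixed element stripped, parsing succeeds.
good = '<div><span>ok</span></div>'
root = ET.fromstring(good)
print(parsed_bad, root.tag)  # → False div
```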
## KG Build Module

The SDK now includes a profile-driven cloud mapping module under `wordlift_sdk.kg_build`.
- Public module import: `wordlift_sdk.kg_build`
- Canonical cloud orchestration path: `wordlift_sdk.kg_build.cloud_flow.run_cloud_workflow`
- Supported cloud source modes in the canonical path:
  - `urls`
  - `sitemap_url` (optional `sitemap_url_pattern`)
  - `sheets_url` + `sheets_name`
- Postprocessor runner entrypoint: `python -m wordlift_sdk.kg_build.postprocessor_runner`
- Persistent postprocessor worker entrypoint: `python -m wordlift_sdk.kg_build.postprocessor_worker`
- URL handling parity with the legacy workflow:
  - `WebPageScrapeUrlHandler` is always enabled for `kg_build`
  - `SearchConsoleUrlHandler` is enabled when `GOOGLE_SEARCH_CONSOLE=True` (default)
- Postprocessor manifest precedence:
  - `profiles/<profile>/postprocessors.toml` (exclusive when present)
  - fallback: `profiles/_base/postprocessors.toml`
  - otherwise no postprocessors
- Execution is manifest-based only (hard cutover): no legacy `.py` or `*.command.toml` discovery.
- During callback patch preparation, the SDK annotates all URI-subject nodes in the generated graph with `seovoc:source "web-page-import"` (blank nodes are not annotated).
- Postprocessor runtime mode:
  - `profiles.<profile>.postprocessor_runtime` overrides `_base`.
  - `_base.postprocessor_runtime` is used when the profile value is missing.
  - The SDK default is `persistent`.
  - `persistent` keeps one long-lived subprocess per configured class and reuses it across callbacks.
- Template exports inheritance:
  - supported files: `exports.toml`, `exports.toml.j2`, `exports.toml.liquid`
  - lookup locations: profile root (`profiles/_base`, `profiles/<profile>`) and templates directories (backward compatible)
  - precedence: `_base` first, selected profile second; selected keys override `_base`
- Postprocessor authoring contract:
  - supported method: `process_graph(self, graph, context)`
  - supported return values: `Graph`, `None`, or an awaitable resolving to `Graph | None`
  - in persistent mode, each worker instance processes one job at a time (callbacks can still run concurrently across different workers/classes)
  - `context.profile` contains the resolved/interpolated profile object (including inherited fields)
  - `context.account_key` contains the runtime API key and is required for postprocessor execution
  - `context.account` stays the clean `/me` account object (no injected key)
  - the API base URL should be read from `context.profile["settings"]["api_url"]` (defaults to `https://api.wordlift.io`)
- Run-level sync KPIs: `ProfileImportProtocol.get_kpi_summary()` returns:
  - graph totals: `total_entities`, `type_assertions_total`, `property_assertions_total`
  - graph breakdowns: `entities_by_type`, `properties_by_predicate`
  - validation totals: `validation.total`, `validation.pass`, `validation.fail` (when validation is enabled)
  - validation breakdowns: `validation.warnings.{count,sources}`, `validation.errors.{count,sources}` (when validation is enabled)
- Validation can be enabled per profile with:
  - `shacl_validate_sync` / `SHACL_VALIDATE_SYNC` (`true|false`, default `false`)
  - `shacl_validate_mode` / `SHACL_VALIDATE_MODE` (`warn|strict`, default `warn`)
  - `shacl_shape_specs` / `SHACL_SHAPE_SPECS` (optional list or comma-separated shape names/files)
- `run_cloud_workflow(..., on_kpi=...)` emits the final KPI summary once at run end (including failed runs with partial data).
- `run_cloud_workflow(..., on_progress=...)` emits per-graph progress payloads during sync, including graph metrics and (when enabled) validation summaries.
- `run_cloud_workflow(..., on_info=...)` remains supported and can be used together with `on_progress`/`on_kpi`.
- The final KPI payload uses `validation = null` when SHACL sync validation is disabled.
- Migration notes and the deprecation window for non-canonical behavior are documented in `docs/kg_build_cloud_workflow_migration.md`.
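A postprocessor following the authoring contract above might look like this. It is a sketch: the class name is invented, and the context attribute access mirrors the documented bullet points rather than a published base class:

```python
class AnnotateSourcePostprocessor:
    """Hypothetical manifest-registered postprocessor."""

    def process_graph(self, graph, context):
        # Read the API base URL from the resolved profile, as recommended.
        settings = context.profile.get("settings", {})
        api_url = settings.get("api_url", "https://api.wordlift.io")

        # context.account_key carries the runtime API key and is required
        # for postprocessor execution.
        if not context.account_key:
            raise RuntimeError("account_key is required for postprocessor execution")

        # Returning None leaves the graph unchanged; returning a Graph
        # replaces it. An awaitable resolving to Graph | None also works.
        return None
```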
## Ingestion Module

The SDK now includes a reusable 2-axis ingestion module under `wordlift_sdk.ingestion`:
- Axis A (`INGEST_SOURCE`): `urls` | `sitemap` | `sheets` | `local`
- Axis B (`INGEST_LOADER`): `simple` | `proxy` | `playwright` | `premium_scraper` | `web_scrape_api` | `passthrough`
The default loader is `web_scrape_api`. If an item already includes embedded HTML and `INGEST_PASSTHROUGH_WHEN_HTML=True` (default), ingestion uses `passthrough` before network loaders.
`INGEST_SOURCE` and `INGEST_LOADER` are required. The legacy resolver fallback from `WEB_PAGE_IMPORT_MODE`/`WEB_PAGE_IMPORT_TIMEOUT` has been removed.
Playwright ingestion failures keep a stable top-level code/message and expose root-cause diagnostics (`root_exception_type`, `root_exception_message`, `phase`, `url`, `wait_until`, `timeout_ms`, `headless`) in `ingest.item_failed.meta`.

When ingestion is triggered from async workflows, the Playwright loader avoids executing Sync API calls directly on the active asyncio loop thread.

The default Playwright wait mode for ingestion is `domcontentloaded`; navigation timeouts now return partial page HTML when available instead of failing immediately.

Bridge handler failures (`IngestionWebPageScrapeUrlHandler`) preserve the existing loader code/message text and append parseable diagnostics from `ingest.item_failed.meta` when available.
Quick start:
```python
from wordlift_sdk.ingestion import run_ingestion

result = run_ingestion(
    {
        "INGEST_SOURCE": "urls",
        "URLS": ["https://example.com"],
        "INGEST_LOADER": "web_scrape_api",
        "WORDLIFT_KEY": "your-api-key",
    }
)
```
## Testing

```shell
poetry install --with dev
poetry run pytest
```
## Documentation
- Documentation Index: Quick index for all user and agent-facing docs.
- Ingestion Pipeline: 2-axis source/loader architecture and compatibility rules.
- Public Entry Points: Task-oriented inventory of client APIs by module file.
- Google Sheets Lookup: Utility for O(1) lookups from Google Sheets.
- Web Page Import: Configure fetch options, proxies, and JS rendering.
- KG Build KPI + Validation Callbacks: Client contract and payload examples for `on_progress` and `on_kpi`.
- KG Build Cloud Workflow Migration: Canonical `run_cloud_workflow` migration steps, deprecation window, and source/runtime expectations.
- Worai SDK Integration Contract v6: Version-locked implementation contract for worai integrations on SDK 6.x.
- Structured Data: Structured data architecture and pipeline behavior.
- Canonical ID Policy: Scope strategy, deterministic type precedence, and URL-preserving rewrite guarantees.
- Customer Project Contract: Profile repo contract and manifest-based postprocessor runtime.
- Structured Data Spec: Internal technical details for runtime placeholder resolution.
- Ingestion Pipeline Spec: Internal source/loader contract and precedence rules.
- Profile Config Spec: Profile inheritance, environment interpolation, and manifest postprocessor contract.
- Pipeline Architecture Spec: `kg_build` runtime flow and callback architecture.
- Migration Guide: Breaking changes for the structured data refactor.
- Changelog: Versioned release notes.
## File details

Details for the file `wordlift_sdk-6.0.4.tar.gz`.
### File metadata
- Download URL: wordlift_sdk-6.0.4.tar.gz
- Upload date:
- Size: 318.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `aaf2fd8e85081c2253b97b4d6b1cab1f9a88c2fd071819d0397461235e520041` |
| MD5 | `54d36d1e681f9feb303741e0f8768670` |
| BLAKE2b-256 | `0b9e7578f8aa639a75ae0031883eb4351b58792483542b9752811630fa7c4292` |
## Provenance

The following attestation bundles were made for `wordlift_sdk-6.0.4.tar.gz`:

Publisher: `ci.yml` on wordlift/python-sdk

- Statement type: `https://in-toto.io/Statement/v1`
- Predicate type: `https://docs.pypi.org/attestations/publish/v1`
- Subject name: `wordlift_sdk-6.0.4.tar.gz`
- Subject digest: `aaf2fd8e85081c2253b97b4d6b1cab1f9a88c2fd071819d0397461235e520041`
- Sigstore transparency entry: 991531568
- Permalink: wordlift/python-sdk@1c8d3b24d92fd69106e11acdb677f7beb2a7fc58
- Branch / Tag: refs/tags/6.0.4
- Owner: https://github.com/wordlift
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: `ci.yml@1c8d3b24d92fd69106e11acdb677f7beb2a7fc58`
- Trigger Event: push
File details
Details for the file wordlift_sdk-6.0.4-py3-none-any.whl.
### File metadata
- Download URL: wordlift_sdk-6.0.4-py3-none-any.whl
- Upload date:
- Size: 369.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `8613404315454fce4236a1892fd6c7eed53f7128b7b9f517d89c4470577ca1b0` |
| MD5 | `8da871a52a6f48c1a13ea1738f848660` |
| BLAKE2b-256 | `ffebdb1ebe7ee139b0ac9e4b5ec3988aae62ca6f9437e9e6ec592b685cc07e1b` |
## Provenance

The following attestation bundles were made for `wordlift_sdk-6.0.4-py3-none-any.whl`:

Publisher: `ci.yml` on wordlift/python-sdk

- Statement type: `https://in-toto.io/Statement/v1`
- Predicate type: `https://docs.pypi.org/attestations/publish/v1`
- Subject name: `wordlift_sdk-6.0.4-py3-none-any.whl`
- Subject digest: `8613404315454fce4236a1892fd6c7eed53f7128b7b9f517d89c4470577ca1b0`
- Sigstore transparency entry: 991531571
- Permalink: wordlift/python-sdk@1c8d3b24d92fd69106e11acdb677f7beb2a7fc58
- Branch / Tag: refs/tags/6.0.4
- Owner: https://github.com/wordlift
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: `ci.yml@1c8d3b24d92fd69106e11acdb677f7beb2a7fc58`
- Trigger Event: push