WordLift Python SDK

A Python toolkit for orchestrating WordLift imports: fetch URLs from sitemaps, Google Sheets, or explicit lists, filter out already imported pages, enqueue search console jobs, push RDF graphs, and call the WordLift APIs to import web pages.

Features

  • URL sources: XML sitemaps (with optional regex filtering), Google Sheets (url column), or Python lists.
  • Change detection: skips URLs that are already imported unless OVERWRITE is enabled; re-imports when lastmod is newer.
  • Web page imports: sends URLs to WordLift with embedding requests, output types, retry logic, and pluggable callbacks.
  • Python 3.14 compatibility: retry filters use pydantic_core.ValidationError via the public API.
  • Search Console refresh: triggers analytics imports when top queries are stale.
  • Graph templates: renders .ttl.liquid templates under data/templates with account data and uploads the resulting RDF graphs.
  • Extensible: override protocols via WORDLIFT_OVERRIDE_DIR without changing the library code.

Installation

pip install wordlift-sdk
# or
poetry add wordlift-sdk

Requires Python 3.10–3.14.

Configuration

Settings are read in order: config/default.py (or a custom path you pass to ConfigurationProvider.create), environment variables, then (when available) Google Colab userdata.
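That precedence can be sketched as a layered lookup (a stdlib sketch assuming the first source that defines a key wins; the function name is illustrative, and the Colab userdata layer is omitted):

```python
import os

def resolve_setting(name: str, file_settings: dict, default=None):
    """Layered lookup mirroring the order above: config file,
    then environment variables, then a default."""
    if name in file_settings:
        return file_settings[name]
    if name in os.environ:
        return os.environ[name]
    return default

file_settings = {"SITEMAP_URL": "https://example.com/sitemap.xml"}
os.environ["WORDLIFT_KEY"] = "env-key"

resolve_setting("SITEMAP_URL", file_settings)                        # from file
resolve_setting("WORDLIFT_KEY", file_settings)                       # from environment
resolve_setting("API_URL", file_settings, "https://api.wordlift.io") # default
```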

Common options:

  • WORDLIFT_KEY (required): WordLift API key.
  • API_URL: WordLift API base URL, defaults to https://api.wordlift.io.
  • SITEMAP_URL: XML sitemap to crawl; SITEMAP_URL_PATTERN optional regex to filter URLs.
  • SHEETS_URL, SHEETS_NAME, SHEETS_SERVICE_ACCOUNT: use a Google Sheet as the URL source; SHEETS_SERVICE_ACCOUNT points to the service-account credentials file.
  • URLS: list of URLs (e.g., ["https://example.com/a", "https://example.com/b"]).
  • OVERWRITE: re-import URLs even if already present (default False).
  • WEB_PAGE_IMPORT_WRITE_STRATEGY: WordLift write strategy (default createOrUpdateModel).
  • EMBEDDING_PROPERTIES: list of schema properties to embed.
  • WEB_PAGE_TYPES: output schema types, defaults to ["http://schema.org/Article"].
  • GOOGLE_SEARCH_CONSOLE: enable/disable Search Console handler (default True).
  • CONCURRENCY: max concurrent handlers, defaults to min(cpu_count(), 4).
  • WORDLIFT_OVERRIDE_DIR: folder containing protocol overrides (default app/overrides).

TLS/SSL

The SDK enforces SSL verification. On macOS it uses the system CA bundle when available and falls back to certifi if needed. You can override the CA bundle path explicitly in code:

from pathlib import Path

from wordlift_sdk.client import ClientConfigurationFactory
from wordlift_sdk.structured_data import CreateRequest

factory = ClientConfigurationFactory(
    key="your-api-key",
    api_url="https://api.wordlift.io",
    ssl_ca_cert="/path/to/ca.pem",
)
configuration = factory.create()

request = CreateRequest(
    url="https://example.com",
    target_type="Thing",
    output_dir=Path("."),
    base_name="structured-data",
    jsonld_path=None,
    yarrml_path=None,
    api_key="your-api-key",
    base_url=None,
    ssl_ca_cert="/path/to/ca.pem",
    debug=False,
    headed=False,
    timeout_ms=30000,
    max_retries=2,
    quality_check=True,
    max_xhtml_chars=40000,
    max_text_node_chars=400,
    max_nesting_depth=2,
    verbose=True,
    validate=True,
    wait_until="networkidle",
)

Note: target_type is used for agent guidance and validation shape selection. The YARRRML materialization pipeline now preserves authored mapping semantics and does not coerce nodes to Review/Thing.

Example config/default.py:

WORDLIFT_KEY = "your-api-key"
SITEMAP_URL = "https://example.com/sitemap.xml"
SITEMAP_URL_PATTERN = r"^https://example.com/article/.*$"
GOOGLE_SEARCH_CONSOLE = True
WEB_PAGE_TYPES = ["http://schema.org/Article"]
EMBEDDING_PROPERTIES = [
    "http://schema.org/headline",
    "http://schema.org/abstract",
    "http://schema.org/text",
]

Running the import workflow

import asyncio
from wordlift_sdk import run_kg_import_workflow

if __name__ == "__main__":
    asyncio.run(run_kg_import_workflow())

The workflow:

  1. Renders and uploads RDF graphs from data/templates/*.ttl.liquid using account info.
  2. Builds the configured URL source and filters out unchanged URLs (unless OVERWRITE).
  3. Sends each URL to WordLift for import with retries and optional Search Console refresh.

You can build components yourself when you need more control:

import asyncio
from wordlift_sdk.container.application_container import ApplicationContainer

async def main():
    container = ApplicationContainer()
    workflow = await container.create_kg_import_workflow()
    await workflow.run()

asyncio.run(main())

Custom callbacks and overrides

Override the web page import callback by placing web_page_import_protocol.py with a WebPageImportProtocol class under WORDLIFT_OVERRIDE_DIR (default app/overrides). The callback receives a WebPageImportResponse and can push to graph_queue or entity_patch_queue.
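A hypothetical override sketch is below. The class name follows the documented contract, but the method name, its signature, and the response fields are illustrative assumptions, not the SDK's actual protocol:

```python
import asyncio
from types import SimpleNamespace

class WebPageImportProtocol:
    """Hypothetical app/overrides/web_page_import_protocol.py.
    Method name, signature, and response fields are assumptions."""

    async def on_web_page_import(self, response, graph_queue, entity_patch_queue):
        # Route based on whether the response carries an entity id.
        if getattr(response, "id", None):
            await entity_patch_queue.put({"id": response.id, "op": "touch"})
        else:
            await graph_queue.put(response)

async def _demo():
    proto = WebPageImportProtocol()
    graph_q, patch_q = asyncio.Queue(), asyncio.Queue()
    # SimpleNamespace stands in for a WebPageImportResponse.
    await proto.on_web_page_import(SimpleNamespace(id="urn:x"), graph_q, patch_q)
    return patch_q.qsize()

queued = asyncio.run(_demo())
```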

Templates

Add .ttl.liquid files under data/templates. Templates render with account fields available (e.g., {{ account.dataset_uri }}) and are uploaded before URL handling begins.
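A minimal template sketch (for example data/templates/organization.ttl.liquid; only account.dataset_uri is documented, so the entity path /organization is illustrative):

```liquid
@prefix schema: <http://schema.org/> .

# Rendered with account data before upload; /organization is an assumption.
<{{ account.dataset_uri }}/organization> a schema:Organization ;
    schema:url <{{ account.dataset_uri }}> .
```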

Validation

SHACL validation utilities and generated Google Search Gallery shapes are included.

When a feature includes both container types (for example ItemList, BreadcrumbList, QAPage, FAQPage, Quiz, ProfilePage, Product, Recipe, Course, Review) and their contained types (ListItem, Question, Answer, Comment, Offer, AggregateOffer, HowToStep, Person, Organization, Rating, AggregateRating, Review, ItemList), the generator scopes the contained constraints under the container properties to avoid enforcing them on unrelated nodes. For Product snippets, offers is scoped as Offer or AggregateOffer, matching Google requirements.

The generator also captures "one of" requirements expressed in prose lists and emits sh:or constraints so any listed property satisfies the requirement. Schema.org grammar checks are intentionally permissive and accept URL/text literals for all properties.
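An sh:or constraint of that kind looks roughly like the hand-written shape below (illustrative, not generator output), modeled on Google's Product requirement that at least one of offers, review, or aggregateRating be present:

```turtle
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix schema: <http://schema.org/> .
@prefix ex: <http://example.com/shapes#> .

# The node conforms when at least one listed property is present.
ex:ProductOneOfShape a sh:NodeShape ;
    sh:targetClass schema:Product ;
    sh:or (
        [ sh:path schema:offers ; sh:minCount 1 ]
        [ sh:path schema:review ; sh:minCount 1 ]
        [ sh:path schema:aggregateRating ; sh:minCount 1 ]
    ) .
```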

Use wordlift_sdk.validation.validate_jsonld_from_url to render a URL with Playwright, extract JSON-LD fragments, and validate them against SHACL shapes.

Playwright is required for URL rendering. After installing dependencies, install the browser binaries:

poetry run playwright install

Structured Data Tokens

YARRRML mappings are now executed directly by morph-kgc's native YARRRML support. There is no JS transpilation step via yarrrml-parser, and no temporary mapping.ttl conversion artifact in the materialization pipeline.

Customer-authored mappings can use runtime tokens:

  • __XHTML__ for the local XHTML source path used by materialization.
  • __URL__ for canonical page URL injection.
  • __ID__ for callback/import entity IRI injection.

__URL__ resolution order is:

  1. response.web_page.url
  2. explicit url argument passed to materialization

__ID__ resolution source is:

  1. response.id (legacy import callbacks)
  2. existing_web_page_id injected by kg_build scrape callbacks

When unresolved:

  • strict mode (strict_url_token=True): fail fast
  • default non-strict mode: warn and keep __URL__ unchanged
  • __ID__: fail closed with an explicit error

Recommendation: use __ID__ in subject/object IRI positions instead of temporary hardcoded page subjects such as {{ dataset_uri }}/web-pages/page.

Compatibility note: morph-kgc native YARRRML behavior may differ from legacy JS parser behavior for some advanced XPath/function constructs.

When preparing XHTML sources from raw HTML, HtmlConverter strips undeclared namespace prefixes from tag names and removes undeclared prefixed attributes to avoid xml.etree.ElementTree.ParseError: unbound prefix failures in XPath materialization flows. It also removes XML-invalid comments/processing instructions, validates output with xml.etree.ElementTree.fromstring(), and runs a strict fallback sanitation pass before surfacing a context-rich conversion error. Converted XHTML also strips default xmlns declarations so unprefixed XPath selectors (for example .//div, .//h1) work with __XHTML__ sources.
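The failure mode HtmlConverter guards against can be illustrated with a toy sanitizer (this is not HtmlConverter's implementation, and unlike it, the toy does not check which prefixes are actually declared):

```python
import re
import xml.etree.ElementTree as ET

def strip_undeclared_prefixes(html: str) -> str:
    """Drop 'prefix:' from tag names and remove prefixed attributes so
    ElementTree stops raising ParseError: unbound prefix."""
    html = re.sub(r"<(/?)\w+:(\w+)", r"<\1\2", html)   # <fb:like> -> <like>
    html = re.sub(r'\s+\w+:[\w-]+="[^"]*"', "", html)  # drop og:title="..."
    return html

raw = '<div og:title="x"><fb:like>hi</fb:like></div>'
# ET.fromstring(raw) would raise ParseError: unbound prefix
clean = strip_undeclared_prefixes(raw)
ET.fromstring(clean)  # now parses
```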

KG Build Module

The SDK now includes a profile-driven cloud mapping module under wordlift_sdk.kg_build.

  • Public module import: wordlift_sdk.kg_build
  • Canonical cloud orchestration path: wordlift_sdk.kg_build.cloud_flow.run_cloud_workflow
  • Supported cloud source modes in canonical path:
    • urls
    • sitemap_url (optional sitemap_url_pattern)
    • sheets_url + sheets_name
  • Postprocessor runner entrypoint: python -m wordlift_sdk.kg_build.postprocessor_runner
  • Persistent postprocessor worker entrypoint: python -m wordlift_sdk.kg_build.postprocessor_worker
  • URL handling parity with legacy workflow:
    • WebPageScrapeUrlHandler is always enabled for kg_build
    • SearchConsoleUrlHandler is enabled when GOOGLE_SEARCH_CONSOLE=True (default)
  • Postprocessor manifest precedence:
    1. profiles/<profile>/postprocessors.toml (exclusive when present)
    2. profiles/_base/postprocessors.toml as a fallback
    3. otherwise no postprocessors
  • Execution is manifest-based only (hard cutover): no legacy .py or *.command.toml discovery.
  • During callback patch preparation, the SDK annotates all URI-subject nodes in the generated graph with seovoc:source "web-page-import" (blank nodes are not annotated).
  • Postprocessor runtime mode:
    • profiles.<profile>.postprocessor_runtime overrides _base.
    • _base.postprocessor_runtime is used when profile value is missing.
    • SDK default is persistent.
    • persistent keeps one long-lived subprocess per configured class and reuses it across callbacks.
  • Template exports inheritance:
    • supported files: exports.toml, exports.toml.j2, exports.toml.liquid
    • lookup locations: profile root (profiles/_base, profiles/<profile>) and templates directories (backward compatible)
    • precedence: _base first, selected profile second; selected keys override _base
  • Postprocessor authoring contract:
    • supported method: process_graph(self, graph, context)
    • supported return values: Graph, None, or an awaitable resolving to Graph | None
    • in persistent mode, each worker instance processes one job at a time (callbacks can still run concurrently across different workers/classes)
    • context.profile contains the resolved/interpolated profile object (including inherited fields)
    • context.account_key contains the runtime API key and is required for postprocessor execution
    • keep context.account as the clean /me account object (no injected key)
    • API base URL should be read from context.profile["settings"]["api_url"] (defaults to https://api.wordlift.io)
  • Run-level sync KPIs:
    • ProfileImportProtocol.get_kpi_summary() returns:
      • graph totals: total_entities, type_assertions_total, property_assertions_total
      • graph breakdowns: entities_by_type, properties_by_predicate
      • validation totals: validation.total, validation.pass, validation.fail (when validation is enabled)
      • validation breakdowns: validation.warnings.{count,sources}, validation.errors.{count,sources} (when validation is enabled)
    • Validation can be enabled per profile with:
      • shacl_validate_sync / SHACL_VALIDATE_SYNC (true|false, default false)
      • shacl_validate_mode / SHACL_VALIDATE_MODE (warn|strict, default warn)
      • shacl_shape_specs / SHACL_SHAPE_SPECS (optional list or comma-separated shape names/files)
    • run_cloud_workflow(..., on_kpi=...) emits the final KPI summary once at run end (including failed runs with partial data).
    • run_cloud_workflow(..., on_progress=...) emits per-graph progress payloads during sync, including graph metrics and (when enabled) validation summaries.
    • run_cloud_workflow(..., on_info=...) remains supported and can be used together with on_progress/on_kpi.
    • final KPI payload uses validation = null when SHACL sync validation is disabled.
    • migration notes and deprecation window for non-canonical behavior are documented in docs/kg_build_cloud_workflow_migration.md.
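A postprocessor following the authoring contract above might look like the sketch below (the class name and the triple it adds are illustrative; a plain set stands in for an rdflib Graph, and SimpleNamespace stands in for the real context object):

```python
from types import SimpleNamespace

class AddApiUrlAnnotation:
    """Hypothetical postprocessor: process_graph(self, graph, context)
    returning the (possibly modified) graph, or None for 'unchanged'."""

    def process_graph(self, graph, context):
        if not context.account_key:
            raise RuntimeError("account_key is required for postprocessor execution")
        # Read the API base URL from the profile settings, with the documented default.
        api_url = context.profile["settings"].get("api_url", "https://api.wordlift.io")
        graph.add(("urn:run", "urn:meta:api-url", api_url))  # illustrative triple
        return graph

context = SimpleNamespace(
    profile={"settings": {}},    # resolved/interpolated profile object
    account_key="your-api-key",  # runtime API key
    account={"id": "acct-1"},    # clean /me account object (no injected key)
)
result = AddApiUrlAnnotation().process_graph(set(), context)
```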

Ingestion Module

The SDK now includes a reusable 2-axis ingestion module under wordlift_sdk.ingestion:

  • Axis A (INGEST_SOURCE): urls|sitemap|sheets|local
  • Axis B (INGEST_LOADER): simple|proxy|playwright|premium_scraper|web_scrape_api|passthrough

Default loader is web_scrape_api. If an item already includes embedded HTML and INGEST_PASSTHROUGH_WHEN_HTML=True (default), ingestion uses passthrough before network loaders.
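The passthrough decision can be sketched as (a stdlib sketch; the function name and item-dict shape are illustrative, not SDK API):

```python
def choose_loader(item: dict, configured_loader: str = "web_scrape_api",
                  passthrough_when_html: bool = True) -> str:
    """Embedded HTML short-circuits to passthrough before any network loader."""
    if passthrough_when_html and item.get("html"):
        return "passthrough"
    return configured_loader
```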

INGEST_SOURCE and INGEST_LOADER are required; the legacy resolver fallback from WEB_PAGE_IMPORT_MODE/WEB_PAGE_IMPORT_TIMEOUT is removed.

  • Playwright ingestion failures keep a stable top-level code/message and expose root-cause diagnostics (root_exception_type, root_exception_message, phase, url, wait_until, timeout_ms, headless) in ingest.item_failed.meta.
  • When ingestion is triggered from async workflows, the Playwright loader avoids executing Sync API calls directly on the active asyncio loop thread.
  • The default Playwright wait mode for ingestion is domcontentloaded; navigation timeouts now return partial page HTML when available instead of failing immediately.
  • Bridge handler failures (IngestionWebPageScrapeUrlHandler) preserve the existing loader code/message text and append parseable diagnostics from ingest.item_failed.meta when available.

Quick start:

from wordlift_sdk.ingestion import run_ingestion

result = run_ingestion(
    {
        "INGEST_SOURCE": "urls",
        "URLS": ["https://example.com"],
        "INGEST_LOADER": "web_scrape_api",
        "WORDLIFT_KEY": "your-api-key",
    }
)

Testing

poetry install --with dev
poetry run pytest
