# scrapling-schema

Schema-driven HTML extractor. Define extraction specs in Python (with full IDE type hints) or YAML, and get structured JSON out.
## Install

```shell
pip install scrapling-schema
```
## Requirements

- Python >= 3.10
- scrapling >= 0.4
- PyYAML >= 6.0
## Python API

### Python type spec (recommended)

```python
from scrapling_schema import Schema, Field, Options, Clear, RegexSub

spec = Schema(
    options=Options(clear=Clear(remove_tags=["script", "style"])),
    fields={
        "products": Field(
            css=".product",
            type="array<object>",
            fields={
                "sku": Field(css="SELF", type="string", attr="data-sku"),
                "name": Field(css=".name", type="string"),
                "url": Field(css="a.link", type="string", attr="href"),
                "price": Field(css=".price", type="number", transform=[
                    RegexSub(pattern=r"[^0-9.]+", repl=""),
                ]),
                "tags": Field(css=".tags li", type="array<string>"),
            },
        )
    },
)

result = spec.extract(html)
json_schema = spec.json_schema(title="Products")
```
### Boolean fields (`type: "boolean"`)

Boolean output is derived from `type`, not a transform. The extractor coerces common truthy/falsey values:

- truthy: `"true"`, `"t"`, `"yes"`, `"y"`, `"on"`, `"1"` (case-insensitive, surrounding whitespace ignored)
- falsey: `"false"`, `"f"`, `"no"`, `"n"`, `"off"`, `"0"`
- numbers: `1` → `True`, `0` → `False` (other numbers become `None`)
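As an illustration, the coercion rules above can be sketched in plain Python. This mirrors the documented behavior only; it is not the library's internal code, and `coerce_boolean` is a hypothetical name:

```python
def coerce_boolean(value):
    """Coerce a scraped value to True/False/None per the documented rules."""
    # bool is a subclass of int, so check it before the numeric branch
    if isinstance(value, bool):
        return value
    if isinstance(value, (int, float)):
        # only exact 1/0 map to booleans; other numbers become None
        if value == 1:
            return True
        if value == 0:
            return False
        return None
    if isinstance(value, str):
        s = value.strip().lower()  # surrounding whitespace and case are ignored
        if s in {"true", "t", "yes", "y", "on", "1"}:
            return True
        if s in {"false", "f", "no", "n", "off", "0"}:
            return False
    return None
```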
Python example:

```python
from scrapling_schema import Schema, Field

html = "<span class='in-stock'> yes </span>"
spec = Schema(fields={"in_stock": Field(css=".in-stock", type="boolean")})
data = spec.extract(html)
assert data["in_stock"] is True
```
If you want invalid/missing values to fail fast, set `nullable=False`:

```python
from scrapling_schema import Schema, Field, ValidationError

html = "<span class='in-stock'> maybe </span>"
spec = Schema(fields={"in_stock": Field(css=".in-stock", type="boolean", nullable=False)})
try:
    spec.extract(html)
except ValidationError:
    pass
```
## YAML spec

```yaml
options:
  clear:
    remove_tags: ["script", "style"]

fields:
  products:
    css: ".product"
    type: "array<object>"
    fields:
      sku:
        css: "SELF"
        type: "string"
        attr: "data-sku"
      name:
        css: ".name"
        type: "string"
      price:
        css: ".price"
        type: "number"
        transform:
          - regex_sub: { pattern: "[^0-9.]+", repl: "" }
      in_stock:
        css: ".in-stock"
        type: "boolean"
```

```python
from scrapling_schema import extract_from_yaml

result = extract_from_yaml(html, yaml_spec)
```
## CLI

```shell
scrapling-schema --spec spec.yml --html-file page.html
scrapling-schema --spec spec.yml --schema
```
## Field reference

| param | type | description |
|---|---|---|
| `css` | str | CSS selector. Use `"SELF"` to select the context node itself |
| `attr` | str | Extract an attribute value (or special `"innerHTML"`) |
| `type` | str | Output type: `"string"`, `"number"`, `"integer"`, `"boolean"`, `"object"`, or `"array<...>"` |
| `nullable` | bool | If false, missing values raise `ValidationError` |
| `defaultValue` | any | Fallback value used when the extracted value is empty |
| `fields` | dict | Nested fields (for `object` / `array<object>`) |
| `transform` | list | Transform pipeline (see below) |
| `callback` | callable | Field-level post-processing hook (Python API only) |
| `required` | bool | Raise `ValidationError` if the value is empty |
Notes:

- `type` is required for every field.
- Arrays must use `type: "array<...>"` (no `items:` and no `list:`).
- `attr` supports special values:
  - `"innerHTML"`: extract the HTML string of the selected node.
  - `"ownText"`: extract the direct text of the selected node (excludes descendant text).
## Transform reference

| transform | shorthand | description |
|---|---|---|
| `RegexSub(pattern, repl)` | — | Regex substitution |
| `Split(delimiter)` | — | Split a string into array items (requires `type: "array<...>"`) |
Notes:

- String outputs are stripped automatically (no transform needed).
- Use field-level `defaultValue` for fallbacks (defaults are not supported inside transforms).
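As a plain-Python illustration of what a `Split` step followed by the automatic stripping of string outputs would yield (a sketch of the documented behavior, not library code):

```python
# Split the raw string on a delimiter, then strip each piece,
# mirroring Split(delimiter) plus the automatic stripping of string outputs.
raw = " red, green , blue "
items = [part.strip() for part in raw.split(",")]
print(items)  # ['red', 'green', 'blue']
```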
## When to use `transform` vs `callback`

Both are meant for post-processing, but they work at different levels and have different ergonomics.

### Use `transform` for value-centric pipelines

A good fit when you want a predictable, reusable pipeline on a single extracted value (e.g., regex cleanup, split).

Order of operations (scalar fields):

1. Extract the raw string
2. Apply the `transform` pipeline
3. Apply `type` coercion (number/integer/boolean)
4. Apply `callback` (if any)
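The four steps above can be sketched as a plain-Python pipeline. This is a simplified model of the documented order with hypothetical names, not the library's implementation:

```python
import re

def run_scalar_field(raw, transforms=(), coerce=None, callback=None):
    """Apply transforms, then type coercion, then the callback, in order."""
    value = raw
    for step in transforms:          # 2. transform pipeline
        value = step(value)
    if coerce is not None:           # 3. type coercion
        value = coerce(value)
    if callback is not None:         # 4. field-level callback
        value = callback(value)
    return value

# 1. the "raw string" would come from the CSS extraction step
price = run_scalar_field(
    "$ 1,299.99",
    transforms=[lambda s: re.sub(r"[^0-9.]+", "", s)],
    coerce=float,
)
print(price)  # 1299.99
```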
Example: remove currency symbols before coercing to number:

```python
from scrapling_schema import Schema, Field, RegexSub

spec = Schema(
    fields={
        "price": Field(
            css=".price",
            type="number",
            transform=[RegexSub(pattern=r"[^0-9.]+", repl="")],
        )
    }
)
data = spec.extract(html)
```
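To see why the transform step matters, here is the same cleanup in plain Python on an assumed sample input (not library code): without the regex, numeric coercion of the raw text would fail.

```python
import re

raw = "€ 42.50"  # hypothetical raw text extracted from .price

try:
    float(raw)  # coercion alone cannot parse currency text
except ValueError:
    pass

cleaned = re.sub(r"[^0-9.]+", "", raw)  # the RegexSub step
print(float(cleaned))  # 42.5
```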
### Use `callback` for whole-field logic (filtering, sorting, aggregation)

`callback` receives the final extracted value for the field:

- scalar field → the scalar value (`str` | `int` | `float` | `bool` | `None`)
- `array<...>` field → the whole list
- `object` field → the whole dict

This is a better fit for list-level operations or aggregations.
Example: filter a list of objects (keep only items you care about):

```python
from scrapling_schema import Schema, Field

def keep_only_a(items: list[dict]) -> list[dict]:
    return [item for item in items if "A" in item["name"]]

spec = Schema(
    fields={
        "products": Field(
            css=".item",
            type="array<object>",
            callback=keep_only_a,
            fields={
                "name": Field(css=".name", type="string"),
            },
        )
    }
)
data = spec.extract(html)
```
### `array<object>` special case: `transform` is per-item

For `type: "array<object>"`, `transform` is applied to each extracted object (each list element). If a transform returns `None`, the item is dropped from the list.

```python
from scrapling_schema import extract

def drop_product_a(item: dict) -> dict | None:
    return None if item.get("name") == "Product A" else item

spec = {
    "fields": {
        "products": {
            "css": ".item",
            "type": "array<object>",
            "transform": [drop_product_a],
            "fields": {"name": {"css": ".name", "type": "string"}},
        }
    }
}
data = extract(html, spec)
```
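The drop-on-`None` semantics can be emulated in plain Python. This is a sketch of the documented behavior on an assumed list of already-extracted items, not the library's code:

```python
def drop_product_a(item):
    # same per-item transform as above: returning None means "drop this item"
    return None if item.get("name") == "Product A" else item

extracted = [{"name": "Product A"}, {"name": "Product B"}, {"name": "Product C"}]

# apply the transform to each element, then discard the Nones
result = [out for out in (drop_product_a(item) for item in extracted) if out is not None]
print(result)  # [{'name': 'Product B'}, {'name': 'Product C'}]
```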
### YAML note

YAML specs support only the built-in transform steps (e.g., `regex_sub`, `split`). Python callables (`transform: [my_fn]` / `callback: my_fn`) are only supported via the Python API (typed `Schema`/`Field` or a Python dict spec), not via YAML text.
## Testing

Install the dev dependencies (in a virtualenv) and run the test suite:

```shell
python -m pip install -e ".[dev]"
python -m pytest
```

Run a single test file:

```shell
python -m pytest tests/test_extractor.py
```