HTML data extraction library

Pickaxe

Pickaxe is a Python package for structured data extraction from HTML documents. It provides a simple, intuitive API for parsing HTML and automatically extracting structured data from it.

Features

  • Written in Rust: The core is implemented in Rust, making Pickaxe fast and memory-efficient.
  • Zero-Copy Parsing: Pickaxe uses the tl crate for zero-copy parsing of HTML documents.
  • Data Maps: Pickaxe can automatically generate CSS selectors for structured data extraction using Data Maps.
  • CSS Selectors & XPath: Pickaxe supports both CSS selectors and (simple) XPath expressions for querying HTML documents.
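For readers unfamiliar with the term, "zero-copy" means the parser records offsets into the original input buffer instead of duplicating substrings. Python's memoryview gives a rough feel for the idea; this is only an illustration of the concept, not how the tl crate is implemented:

```python
# Conceptual illustration of zero-copy parsing: record offsets into the
# original buffer instead of copying substrings out of it.
buf = b"<html><body><h1>Hello, World!</h1></body></html>"
view = memoryview(buf)

# A zero-copy parser would store (start, end) spans per node; slicing a
# memoryview creates no new byte copies, only a new view object.
h1_text = view[16:29]                       # span covering the <h1> text
assert h1_text.obj is buf                   # still backed by the original buffer
assert bytes(h1_text) == b"Hello, World!"   # bytes are copied only here
```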

Quick Start

Installation

pip install python-pickaxe

Basic Usage

from pickaxe import HtmlDocument

# Parse an HTML document
document = HtmlDocument.from_str("<html><body><h1>Hello, World!</h1></body></html>")

# Access elements using CSS selectors or XPath expressions
heading = document.find("h1")
print(heading.inner_text)  # Output: Hello, World!

heading = document.find_xpath("//h1")
print(heading.inner_text)  # Output: Hello, World!
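The README calls Pickaxe's XPath support "simple", which suggests expressions like //h1 that have direct CSS equivalents. As a rough stdlib illustration of that correspondence, here is a hypothetical translator for tag-only paths; `simple_xpath_to_css` is not part of Pickaxe's API, and Pickaxe's own XPath handling may differ:

```python
def simple_xpath_to_css(xpath: str) -> str:
    """Translate a tag-only XPath expression into an equivalent CSS selector.

    Handles only the descendant (//) and child (/) axes with plain tag
    names, e.g. "//div/span" -> "div > span". Purely illustrative.
    """
    if not xpath.startswith("/"):
        raise ValueError("only absolute XPath expressions are supported")
    steps = []
    i = 0
    while i < len(xpath):
        if xpath.startswith("//", i):
            axis, i = " ", i + 2          # descendant combinator
        else:
            axis, i = " > ", i + 1        # child combinator
        j = i
        while j < len(xpath) and xpath[j] != "/":
            j += 1
        name = xpath[i:j]
        if not name:
            raise ValueError(f"empty step in {xpath!r}")
        steps.append((axis, name))
        i = j
    # The leading axis is implicit in CSS (matching starts at the root).
    return steps[0][1] + "".join(axis + name for axis, name in steps[1:])

print(simple_xpath_to_css("//h1"))           # -> h1
print(simple_xpath_to_css("/html/body/h1"))  # -> html > body > h1
```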

Data Maps

Data Maps are a powerful feature of Pickaxe that allow you to automatically find the best (most concise) CSS selectors for an HTML document based on samples.
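The selector-search algorithm itself lives in Pickaxe's Rust core and is not documented here. The sketch below is only a stdlib toy of the underlying idea: collect a candidate selector for each element while parsing, then keep the shortest one whose text matches the sample value exactly. The `_Collector` and `best_selector` names are hypothetical, not Pickaxe API:

```python
from html.parser import HTMLParser

class _Collector(HTMLParser):
    """Record a candidate selector for each open element and its text."""

    def __init__(self):
        super().__init__()
        self.stack = []   # candidate selector of each open element
        self.texts = {}   # candidate selector -> text fragments seen

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        classes = (a.get("class") or "").split()
        if classes:
            sel = f"{tag}.{classes[0]}"
        elif a.get("id"):
            sel = f"{tag}#{a['id']}"
        else:
            sel = tag
        self.stack.append(sel)

    def handle_endtag(self, tag):
        if self.stack:
            self.stack.pop()

    def handle_data(self, data):
        text = data.strip()
        if text and self.stack:
            self.texts.setdefault(self.stack[-1], []).append(text)

def best_selector(html: str, sample: str):
    """Shortest candidate selector whose text is exactly the sample value."""
    collector = _Collector()
    collector.feed(html)
    matches = [sel for sel, texts in collector.texts.items() if texts == [sample]]
    return min(matches, key=len) if matches else None

doc = ('<div class="author-details">'
       '<h3 class="author-title">Albert Einstein</h3>'
       '<span class="author-born-date">March 14, 1879</span></div>')
print(best_selector(doc, "Albert Einstein"))  # -> h3.author-title
print(best_selector(doc, "March 14, 1879"))   # -> span.author-born-date
```

A real data map additionally has to rank competing selectors across several example documents, which is why generate_data_map below accepts a list of documents and a list of sample values per attribute.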

from httpx import AsyncClient
from pickaxe import Attribute, DataMap, HtmlDocument, StructuredData, generate_data_map

# First, generate a data map from a sample HTML document together with
# examples of the data we want to extract. (The `async with`/`await`
# statements below must run inside an async function or event loop.)
async with AsyncClient() as client:
    response = await client.get("http://quotes.toscrape.com/author/Albert-Einstein/")
    document = HtmlDocument.from_str(response.text)

    # In this example we want to extract the name and birth date of the author.
    # Every example document must provide a corresponding sample for each
    # attribute, even if the expected value is None.
    # Note: You can specify the number of iterations to run the algorithm,
    # but 1-3 is typically enough.
    data_map = generate_data_map(
        [document],
        [
            Attribute("name", ["Albert Einstein"]),
            Attribute("birth_date", ["March 14, 1879"]),
        ]
    )

# From this data map we can extract data from other HTML documents
async with AsyncClient() as client:
    response = await client.get("https://quotes.toscrape.com/author/J-K-Rowling/")
    document = HtmlDocument.from_str(response.text)

    data = data_map.extract(document)
    print(data.to_dict()) # Output: {'name': 'J.K. Rowling', 'birth_date': 'July 31, 1965'}

# You can serialize and deserialize the data map to JSON.
data_map_json = data_map.to_json()
data_map = DataMap.from_json(data_map_json)

# The result of `extract()` is a `StructuredData` object, which can be converted to a dictionary or JSON.
print(data.to_dict())
print(data.to_json())
print(StructuredData.from_json(data.to_json()).to_dict())

License

This project is licensed under the MIT License.

Support & Feedback

If you encounter any issues or have feedback, please open an issue. We'd love to hear from you!

Made with ❤️ by Emergent Methods
