BoilerPy3

Python port of Boilerpipe, for HTML boilerplate removal and text extraction
About

BoilerPy3 is a native Python port of Christian Kohlschütter's Boilerpipe library, released under the Apache 2.0 Licence.

This package is based on sammyer's BoilerPy, specifically mercuree's Python3-compatible fork. This fork updates the codebase to be more Pythonic (proper attribute access, docstrings, type-hinting, snake case, etc.) and makes use of Python 3.6 features (f-strings), in addition to switching testing frameworks from Unittest to PyTest.

Note: This package is based on Boilerpipe 1.2 (at or before this commit), as that's when the code was originally ported to Python. I experimented with updating the code to match Boilerpipe 1.3, but because it performed worse in my tests, I ultimately decided to leave it at 1.2-equivalent.

Installation

To install the latest version from PyPI, execute:

pip install boilerpy3

If you'd like to try out any unreleased features, you can install directly from GitHub like so:

pip install git+https://github.com/jmriebold/BoilerPy3

Usage

Text Extraction

The top-level interfaces are the Extractors. Use the get_content() methods to extract the filtered text.

from boilerpy3 import extractors

extractor = extractors.ArticleExtractor()

# From a URL
content = extractor.get_content_from_url('http://example.com/')

# From a file
content = extractor.get_content_from_file('tests/test.html')

# From raw HTML
content = extractor.get_content('<html><body><h1>Example</h1></body></html>')

Marked HTML Extraction

To extract the HTML chunks containing filtered text, use the get_marked_html() methods.

from boilerpy3 import extractors

extractor = extractors.ArticleExtractor()

# From a URL
content = extractor.get_marked_html_from_url('http://example.com/')

# From a file
content = extractor.get_marked_html_from_file('tests/test.html')

# From raw HTML
content = extractor.get_marked_html('<html><body><h1>Example</h1></body></html>')

Other

Alternatively, use get_doc() to return a Boilerpipe document from which you can get more detailed information.

from boilerpy3 import extractors

extractor = extractors.ArticleExtractor()

doc = extractor.get_doc_from_url('http://example.com/')
content = doc.content
title = doc.title

Extractors

All extractors have a raise_on_failure parameter (defaults to True). When set to False, the Extractor will handle exceptions raised during text extraction and return any text that was successfully extracted. Leaving this at the default setting is useful if you want to catch the exception yourself and fall back to another algorithm in the event of an error.

DefaultExtractor

Usually worse than ArticleExtractor, but simpler/no heuristics. A quite generic full-text extractor.

ArticleExtractor

A full-text extractor which is tuned towards news articles. In this scenario it achieves higher accuracy than DefaultExtractor. Works very well for most types of Article-like HTML.

ArticleSentencesExtractor

A full-text extractor which is tuned towards extracting sentences from news articles.

LargestContentExtractor

A full-text extractor which extracts the largest text component of a page. For news articles, it may perform better than the DefaultExtractor, but usually worse than ArticleExtractor.

CanolaExtractor

A full-text extractor trained on krdwrd Canola. Works well with SimpleEstimator, too.

KeepEverythingExtractor

Dummy extractor which marks everything as content. Should return the input text. Use this to double-check whether your problem lies within a particular Extractor or somewhere else.

NumWordsRulesExtractor

A quite generic full-text extractor solely based upon the number of words per block (the current, the previous and the next block).

Notes

Getting Content from URLs

While BoilerPy3 provides extractor.*_from_url() methods as a convenience, these are intended for testing only. For more robust functionality, in addition to full control over the request itself, it is strongly recommended to use the Requests package instead, calling extractor.get_content() with the resulting HTML.

import requests
from boilerpy3 import extractors

extractor = extractors.ArticleExtractor()

# Make request to URL
resp = requests.get('http://example.com/')

# Pass HTML to Extractor
content = extractor.get_content(resp.text)
