Scrapes the main text of web pages while preserving some structure.
Project description
- Code:
- Documentation: see README file
- Issue tracker:
Robust extraction of main text content and boilerplate removal based on a combination of DOM-based examination, XPath expressions and rules. Given an HTML document, this library parses it, retrieves the main body text and converts it to XML or plain text, while preserving part of the text formatting and page structure.
In a nutshell, with Python:
>>> import requests, trafilatura
>>> response = requests.get('https://www.iana.org/about')
>>> trafilatura.process_record(response.text)
>>> # outputs main content in plain text format ...
On the command-line:
$ trafilatura -u https://www.sueddeutsche.de/politik/usa-pompeo-maas-merkel-iran-nordstream-1.4434358
$ # outputs main content in plain text format ...
Description
Scrapes the main text of web pages while preserving some structure. Distinguishing between the whole page and the main text content can help alleviate many quality problems related to web texts.
The purpose is to find the relevant sections of a web page, i.e. the part usually displayed centrally, without the left or right bars, the header or the footer, but including potential titles and comments. In addition, the extraction focuses on original text and can help reduce the noise caused by recurring elements (headers and footers, ads, links/blogroll, etc.).
Also known as web scraping, boilerplate removal or boilerplate detection, DOM-based content extraction, main content identification, web page template detection, web page cleaning, web content extraction, or HTML text cleaning.
Features
Because it relies on lxml, trafilatura is comparatively fast. It is also robust, as the additional generic jusText algorithm is used as a backup solution.
The result of processing can be in plain text or XML format. In the latter case, basic formatting is preserved, such as text formatting (bold, italic, etc.) and page structure (paragraphs, titles, lists), which can be used for further processing.
Work in progress, currently experimental features:
Separate extraction of main text and comments
Duplicate detection at paragraph level using a least recently used (LRU) cache (see the sketch after this list)
Language detection on the extracted content
XML output compatible with the recommendations of the Text Encoding Initiative (XML TEI)
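To illustrate the duplicate detection idea mentioned above, here is a generic sketch of paragraph-level deduplication with a small LRU cache. This is not trafilatura's internal code, only the underlying principle:

# Generic sketch: drop paragraphs that were seen recently, using an LRU cache
from collections import OrderedDict

class LRUCache:
    """Remember the most recently seen items, up to a fixed size."""
    def __init__(self, maxsize=1000):
        self.maxsize = maxsize
        self.cache = OrderedDict()
    def seen(self, item):
        """Return True if the item was seen recently, and record it as seen."""
        key = hash(item)
        if key in self.cache:
            self.cache.move_to_end(key)      # mark as most recently used
            return True
        self.cache[key] = True
        if len(self.cache) > self.maxsize:
            self.cache.popitem(last=False)   # evict the least recently used entry
        return False

lru = LRUCache(maxsize=100)
paragraphs = ['Lorem ipsum dolor sit amet.', 'Some unique paragraph.', 'Lorem ipsum dolor sit amet.']
deduplicated = [p for p in paragraphs if not lru.seen(p)]
# deduplicated now contains only the first two paragraphs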
Installation
trafilatura is a Python package (compatible with Python 3.5 upwards) that is tested on Linux and macOS, is available on PyPI and can be installed using pip:
Install from package repository: pip install trafilatura
(Or use pip3 install trafilatura on systems where Python 2 and 3 are both globally installed and pip refers to Python 2.)
For all experimental functionality, please use pip install trafilatura[all]. This notably enables language detection and faster processing of downloads. Note that the cchardet package is currently not working on some macOS versions.
Direct installation of the latest version (see build status):
pip install git+https://github.com/adbar/trafilatura.git
(For dependency management see this thread)
With Python
Basic use
The simplest way to use trafilatura is as follows:
>>> import requests, trafilatura
>>> response = requests.get('https://www.iana.org/about')
>>> result = trafilatura.process_record(response.text)
>>> print(result) # newlines preserved, TXT output
>>> result = trafilatura.process_record(response.text, xml_output=True)
>>> print(result) # some formatting preserved in basic XML structure
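Since the XML result is returned as a string, it can be handed back to lxml for further processing. The following is a minimal sketch reusing the result from above; the tag name p is an assumption about the output structure and may differ:
>>> from lxml import etree
>>> tree = etree.fromstring(result.encode('utf-8'))  # parse the XML string returned above
>>> paragraphs = [elem.text for elem in tree.iter('p') if elem.text]  # 'p' is assumed to hold paragraphs
>>> print(len(paragraphs))  # number of paragraphs recovered from the page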
The only required argument is the HTML content (here the response text); the rest is optional. It is also possible to use a previously parsed tree (i.e. an lxml.html object) as input, which is then handled seamlessly.
>>> from lxml import html
>>> mytree = html.fromstring('<html><body><article><p>Here is the main text. It has to be long enough in order to bypass the safety checks. Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.</p></article></body></html>')
>>> trafilatura.process_record(mytree)
'Here is the main text. It has to be long enough in order to bypass the safety checks. Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.\n'
Experimental feature: the target language can also be set using 2-letter codes (ISO 639-1). There will be no output if the detected language of the result does not match, and no such filtering takes place if the language identification component has not been installed (see the installation instructions above).
>>> result = trafilatura.process_record(response.text, url, target_language='de')
For further configuration see the variables in settings.py.
On the command-line
A command-line interface is included; URLs can be used directly (-u/--URL):
$ trafilatura -u https://www.sueddeutsche.de/politik/usa-pompeo-maas-merkel-iran-nordstream-1.4434358
$ # outputs main content in plain text format ...
$ trafilatura --xml --URL "https://de.creativecommons.org/index.php/was-ist-cc/"
$ # outputs main text with basic XML structure ...
You can also pipe an HTML document (or response body) to trafilatura:
$ wget -qO- "https://de.creativecommons.org/index.php/was-ist-cc/" | trafilatura
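The same pipeline can be reproduced programmatically by reading the document from standard input. A minimal sketch (the script name read_stdin.py is only an example):

# read_stdin.py: extract the main text of an HTML document piped in on standard input
import sys
import trafilatura

htmlstring = sys.stdin.read()                    # e.g. the output of the wget command above
result = trafilatura.process_record(htmlstring)  # plain-text extraction
if result:                                       # guard in case nothing could be extracted
    print(result)

It can then be used in place of the bare trafilatura call:
$ wget -qO- "https://de.creativecommons.org/index.php/was-ist-cc/" | python read_stdin.py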
For usage instructions see trafilatura -h:
usage: trafilatura [-h] [-f] [--nocomments] [--notables] [--xml] [--xmltei] [-u URL] [-v]
optional arguments:
  -h, --help            show this help message and exit
  -f, --fast            Fast (without fallback detection)
  --nocomments          Don't output any comments
  --notables            Don't output any table elements
  --xml                 XML output
  --xmltei              XML TEI output
  -u URL, --URL URL     custom URL download
  -v, --verbose         increase output verbosity
Additional information
Context
This module is part of methods to derive information from web documents in order to build text databases for research (chiefly linguistic analysis and natural language processing). A significant challenge resides in the ability to extract and pre-process web texts to meet scientific expectations. For more information:
Barbaresi, Adrien. “The Vast and the Focused: On the need for domain-focused web corpora”, Proceedings of the 7th Workshop on Challenges in the Management of Large Corpora (CMLC-7), 2019.
Barbaresi, Adrien. “Efficient construction of metadata-enhanced web corpora”, Proceedings of the 10th Web as Corpus Workshop (WAC-X), 2016.
Name
Trafilatura: Italian word for wire drawing.
Kudos to…
Alternatives
Most corresponding Python packages are not actively maintained; the following alternatives exist:
dragnet features combined and machine-learning approaches, but requires many dependencies as well as extensive tuning
python-readability cleans the page and preserves some markup but is mostly geared towards news texts
goose can extract information for embedded content but doesn’t preserve markup and is not maintained
html2text converts HTML pages to Markdown and thus keeps the structure, though it doesn't focus on main text extraction
Contact
Pull requests are welcome.
See my contact page for additional details.
Project details
File details
Details for the file trafilatura-0.1.1.tar.gz.
File metadata
- Download URL: trafilatura-0.1.1.tar.gz
- Upload date:
- Size: 1.5 MB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/1.12.1 pkginfo/1.5.0.1 requests/2.22.0 setuptools/41.2.0 requests-toolbelt/0.9.1 tqdm/4.32.2 CPython/3.6.8
File hashes
Algorithm | Hash digest
---|---
SHA256 | e0400e2bdbb23018987e9638a91bd0edc64798f5205be3a4f92f29f1557937cb
MD5 | aa06f1f175b1eafdcd9652fc91c4b63f
BLAKE2b-256 | 98e704640914aacf912bd8c0a6ad3013ba1783e79f8b0376923a05981f6a564b
File details
Details for the file trafilatura-0.1.1-py3-none-any.whl.
File metadata
- Download URL: trafilatura-0.1.1-py3-none-any.whl
- Upload date:
- Size: 24.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/1.12.1 pkginfo/1.5.0.1 requests/2.22.0 setuptools/41.2.0 requests-toolbelt/0.9.1 tqdm/4.32.2 CPython/3.6.8
File hashes
Algorithm | Hash digest
---|---
SHA256 | 15185f43bbd0d10b20c00e97762f87ffe08fa2c7330718a20bbeffd6a4ab169a
MD5 | 3fe758cbf1cc58b10457d6a151c705c8
BLAKE2b-256 | 3c67177a490d9224ecc2fdeeea7bf96be0785c2cef7fa9c993b5d6bef7316d5f