
Downloads web pages, scrapes main text and comments while preserving some structure, and converts to TXT, CSV, XML & TEI-XML

Project description



Description

Trafilatura is a Python package and command-line tool which seamlessly downloads, parses, and scrapes web page data: it can extract metadata, main body text and comments while preserving parts of the text formatting and page structure. The output can be converted to different formats.

Distinguishing between the whole page and its essential parts helps alleviate many quality problems related to web texts by removing the noise caused by recurring elements (headers and footers, ads, links/blogrolls).

The extractor aims to be precise enough not to miss texts or discard valid documents, while remaining robust and reasonably fast. It is designed to run in production on millions of web documents.

Features

  • Seamless online (including page retrieval) or parallelized offline processing:
    • URLs, HTML files or parsed HTML trees as input (see the sketch right after this list)

  • Several output formats supported: plain text (TXT), CSV, XML and TEI-XML

  • Robust extraction algorithm, using readability and jusText as fallbacks, reasonably efficient with lxml:
    • Focus on main text and/or comments

    • Structural elements preserved: paragraphs, titles, lists, quotes, code, line breaks, in-line text formatting (experimental)

    • Extraction of metadata (title, author, date, site name, categories and tags)

  • URL lists:
    • Generation of link lists from ATOM/RSS feeds

    • Efficient processing of URL queues

    • Filtering against blacklists or already processed URLs

  • Optional language detection on the extracted content
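
As a quick illustration of the input options above (see also the usage section below), an already downloaded HTML document can be passed to the extractor directly; the sample markup here is made up for the example:

>>> import trafilatura
>>> html_doc = '<html><body><article><p>Here is the main text of the page.</p></article></body></html>'
>>> trafilatura.extract(html_doc)
# accepts raw HTML strings (or parsed HTML trees) and returns the extracted text, or None on failure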

Evaluation and alternatives

For more detailed results, see the evaluation page and the evaluation script. To reproduce the tests, clone the repository, install all necessary packages, and run the evaluation script with the data provided in the tests directory.

300 documents, 869 text and 878 boilerplate segments (2020-03-19)

Python Package                    Precision  Recall  Accuracy  F-Score   Time
baseline (text markup)                0.726   0.776     0.742    0.750   1.14
justext 2.2.0 (German stoplist)       0.849   0.529     0.719    0.652   6.37
newspaper3k 0.2.8                     0.923   0.591     0.772    0.721  14.80
goose3 3.1.6                          0.957   0.640     0.807    0.767  21.54
boilerpy3 1.0.2 (article mode)        0.841   0.734     0.799    0.784   5.65
dragnet 2.0.4                         0.909   0.722     0.825    0.804   3.64
readability-lxml 0.7.1                0.928   0.743     0.844    0.826   6.59
news-please 1.4.25                    0.926   0.747     0.844    0.827  70.81
trafilatura 0.4                       0.914   0.869     0.894    0.891   4.87
trafilatura 0.4 (+ fallback)          0.925   0.904     0.916    0.914   9.94
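
As a sanity check on the metrics, the F-Score column corresponds to the harmonic mean of the Precision and Recall columns; a two-line verification in Python, with values taken from the table above:

# verify F-Score = harmonic mean of Precision and Recall (trafilatura 0.4 + fallback row)
p, r = 0.925, 0.904
print(round(2 * p * r / (p + r), 3))  # prints 0.914, matching the table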

Installation

Chiefly with Python package managers: pip install --upgrade trafilatura.

For more details please read the installation documentation.

Usage

With Python or on the command-line.

In a nutshell, with Python:

>>> import trafilatura
>>> downloaded = trafilatura.fetch_url('https://github.blog/2019-03-29-leader-spotlight-erin-spiceland/')
>>> trafilatura.extract(downloaded)
# outputs main content and comments as plain text ...
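
For parallelized or batch offline processing, the two calls above can be combined in a short script. The sketch below is purely illustrative: only fetch_url() and extract() come from the package, while the URL list and the printing of results are made up for the example.

# illustrative batch loop around the two documented calls
import trafilatura

urls = [
    'https://github.blog/2019-03-29-leader-spotlight-erin-spiceland/',
    # ... further URLs
]

for url in urls:
    downloaded = trafilatura.fetch_url(url)   # None if the download failed
    if downloaded is None:
        continue
    result = trafilatura.extract(downloaded)  # None if nothing could be extracted
    if result is not None:
        print(result)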

On the command-line:

$ trafilatura -u "https://github.blog/2019-03-29-leader-spotlight-erin-spiceland/"
# outputs main content and comments as plain text ...

For more information please refer to the usage documentation.

License

trafilatura is distributed under the GNU General Public License v3.0.

GPL and free software licensing: What’s in it for business?

Going further

Online documentation: trafilatura.readthedocs.io

Trafilatura: Italian word for wire drawing.

Roadmap

  • [-] Language detection on the extracted content

  • [-] Duplicate detection at sentence, paragraph and document level using a least recently used (LRU) cache (see the generic sketch after this list)

  • [-] URL lists and document management

  • [ ] Configuration and extraction parameters

  • [ ] Integration of natural language processing tools
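
The LRU-cache item above refers to a common technique for spotting recently seen text segments. The sketch below is a generic illustration of that idea in plain Python, not trafilatura's own code; the class name LRUSet and all parameters are invented for the example.

# generic illustration of LRU-based duplicate detection (not trafilatura's implementation)
from collections import OrderedDict

class LRUSet:
    """Remember the hashes of the last `maxsize` segments seen."""
    def __init__(self, maxsize=1000):
        self.maxsize = maxsize
        self.seen = OrderedDict()

    def is_duplicate(self, segment):
        key = hash(segment)
        if key in self.seen:
            self.seen.move_to_end(key)         # refresh recency
            return True
        self.seen[key] = True
        if len(self.seen) > self.maxsize:
            self.seen.popitem(last=False)      # evict least recently used entry
        return False

# usage: skip paragraphs that were already seen recently
cache = LRUSet()
for paragraph in ('Lorem ipsum.', 'Lorem ipsum.', 'Something new.'):
    print(paragraph, '-> duplicate' if cache.is_duplicate(paragraph) else '-> new')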

Contributing

Contributions are welcome!

Feel free to file issues on the dedicated page. Thanks to the contributors who submitted features and bugfixes!

Author

This effort is part of methods to derive information from web documents in order to build text databases for research (chiefly linguistic analysis and natural language processing). A significant challenge resides in the ability to extract and pre-process web texts to meet scientific expectations: web corpus construction involves numerous design decisions, and this software package can help facilitate text collection and enhance corpus quality.

DOI: 10.5281/zenodo.3460969

You can contact me via my contact page or GitHub.

Project details


Download files

Download the file for your platform.

Source Distribution

trafilatura-0.5.0.tar.gz (2.6 MB)

Uploaded Source

Built Distribution

trafilatura-0.5.0-py3-none-any.whl (147.4 kB)

Uploaded Python 3

File details

Details for the file trafilatura-0.5.0.tar.gz.

File metadata

  • Download URL: trafilatura-0.5.0.tar.gz
  • Upload date:
  • Size: 2.6 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.1.1 pkginfo/1.5.0.1 requests/2.23.0 setuptools/45.1.0 requests-toolbelt/0.9.1 tqdm/4.41.1 CPython/3.6.9

File hashes

Hashes for trafilatura-0.5.0.tar.gz
Algorithm Hash digest
SHA256 c8fc8cebfbe7e566698c4b469f4b4eab627f6942c62aa1a4d72f33bc3df29eb9
MD5 a4fd45915995ae327050317943207316
BLAKE2b-256 9a6bd01aea9008a7a980302de1939c98fa94df24765884e76c2967d4c68d509d


File details

Details for the file trafilatura-0.5.0-py3-none-any.whl.

File metadata

  • Download URL: trafilatura-0.5.0-py3-none-any.whl
  • Upload date:
  • Size: 147.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.1.1 pkginfo/1.5.0.1 requests/2.23.0 setuptools/45.1.0 requests-toolbelt/0.9.1 tqdm/4.41.1 CPython/3.6.9

File hashes

Hashes for trafilatura-0.5.0-py3-none-any.whl
Algorithm Hash digest
SHA256 118feeb1c8794c70d60ced27b45d7563dc6eb12ade4594ae7d4f0dd0427de3d8
MD5 93de40921e8b1ae34761e22f0b6add96
BLAKE2b-256 8a76d83a78cb2e51312f12baf7b08f5b26dad1829e1b798652f88491853cfd85

