Downloads web pages, scrapes main text and comments while preserving some structure, and converts to TXT, CSV, JSON and XML

Project description


Description

Trafilatura is a Python package and command-line tool which seamlessly downloads, parses, and scrapes web page data: it can extract metadata, main body text and comments while preserving parts of the text formatting and page structure. The output can be converted to different formats.

Distinguishing a page's essential parts from the rest can help alleviate many quality problems in web text processing by filtering out the noise caused by recurring elements (headers and footers, ads, links/blogroll, etc.).

The extractor aims to be precise enough neither to miss texts nor to discard valid documents. It also has to be robust and reasonably fast. With these objectives in mind, Trafilatura is designed to run in production on millions of web documents. It is based on lxml, with readability and jusText as fallbacks.

Features

  • Seamless parallelized online and offline processing:
    • Download and conversion utilities included

    • URLs, HTML files or parsed HTML trees as input

  • Robust and efficient extraction:
    • Main text and/or comments

    • Structural elements preserved: paragraphs, titles, lists, quotes, code, line breaks, in-line text formatting

    • Extraction of metadata (title, author, date, site name, categories and tags)

  • Several output formats supported:
    • Plain text (minimal formatting)

    • CSV (with metadata, tab-separated values)

    • JSON (with metadata)

    • XML (for metadata and structure) and TEI-XML

  • Link discovery and URL lists:
    • Support for sitemaps and ATOM/RSS feeds

    • Efficient and polite processing of URL queues

    • Blacklisting

  • Optional language detection on extracted content
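The polite queue processing and blacklisting above can be approximated with a small standard-library sketch. This is only an illustration of the idea (group URLs by host, rotate between hosts, skip blacklisted ones), not trafilatura's actual implementation:

```python
from collections import OrderedDict, deque
from urllib.parse import urlsplit

def polite_order(urls, blacklist=frozenset()):
    """Round-robin URLs across hosts so no single host is hit in bursts."""
    buckets = OrderedDict()  # host -> queue of its URLs, in first-seen order
    for url in urls:
        host = urlsplit(url).netloc
        if host in blacklist:
            continue  # honour the blacklist
        buckets.setdefault(host, deque()).append(url)
    ordered = []
    while buckets:
        for host in list(buckets):
            ordered.append(buckets[host].popleft())
            if not buckets[host]:
                del buckets[host]
    return ordered

urls = [
    "https://a.example/1", "https://a.example/2",
    "https://b.example/1", "https://c.example/1",
]
print(polite_order(urls, blacklist={"c.example"}))
# → ['https://a.example/1', 'https://b.example/1', 'https://a.example/2']
```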

Evaluation and alternatives

For more detailed results, see the evaluation page and the evaluation script. To reproduce the tests, clone the repository, install the necessary packages, and run the evaluation script on the data provided in the tests directory.

500 documents, 1487 text and 1496 boilerplate segments (2020-11-06)

Python Package                    Precision   Recall   Accuracy   F-Score   Diff.
justext 2.2.0 (tweaked)           0.870       0.584    0.749      0.699     6.1x
newspaper3k 0.2.8                 0.921       0.574    0.763      0.708     12.9x
goose3 3.1.6                      0.950       0.629    0.799      0.757     19.0x
boilerpy3 1.0.2 (article mode)    0.851       0.696    0.788      0.766     4.8x
baseline (text markup)            0.746       0.804    0.766      0.774     1x
dragnet 2.0.4                     0.906       0.689    0.810      0.783     3.1x
readability-lxml 0.8.1            0.917       0.716    0.826      0.804     5.9x
news-please 1.5.13                0.923       0.711    0.827      0.804     184x
trafilatura 0.6.0                 0.924       0.849    0.890      0.885     3.9x
trafilatura 0.6.0 (+ fallbacks)   0.933       0.877    0.907      0.904     8.4x
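The F-scores follow from precision and recall via the standard harmonic mean (F1), as a quick check against the trafilatura 0.6.0 row shows:

```python
def f_score(precision, recall):
    """Harmonic mean of precision and recall (F1)."""
    return 2 * precision * recall / (precision + recall)

# trafilatura 0.6.0: precision 0.924, recall 0.849
print(round(f_score(0.924, 0.849), 3))  # → 0.885
```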

External evaluations:

Usage and documentation

For further information please refer to the documentation.

License

trafilatura is distributed under the GNU General Public License v3.0. If you wish to redistribute this library but feel bound by the license conditions, consider interacting at arm's length, multi-licensing with compatible licenses, or contacting me.

See also GPL and free software licensing: What’s in it for business?

Roadmap

  • [-] Duplicate detection at sentence, paragraph and document level using a least recently used (LRU) cache

  • [-] URL lists and document management

  • [-] Configuration and extraction parameters

  • [ ] Interaction with web archives (notably WARC format)

  • [ ] Integration of natural language processing tools
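The LRU-based duplicate detection on the roadmap can be sketched with the standard library. This is a hypothetical illustration of the mechanism (remember recently seen segments, evict the oldest), not the package's actual algorithm:

```python
from collections import OrderedDict

class LRUDeduplicator:
    """Track hashes of recently seen text segments; evict the oldest."""

    def __init__(self, max_entries=1000):
        self.max_entries = max_entries
        self.seen = OrderedDict()  # segment hash -> None, in recency order

    def is_duplicate(self, segment):
        key = hash(segment.strip())
        if key in self.seen:
            self.seen.move_to_end(key)  # refresh recency on a hit
            return True
        self.seen[key] = None
        if len(self.seen) > self.max_entries:
            self.seen.popitem(last=False)  # evict least recently used
        return False

dedup = LRUDeduplicator(max_entries=2)
print(dedup.is_duplicate("First paragraph."))   # → False
print(dedup.is_duplicate("First paragraph."))   # → True
```

The same cache works at sentence, paragraph, or document level by changing what is passed as a segment.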

Contributing

Contributions are welcome!

Feel free to file issues on the dedicated page. Thanks to the contributors who submitted features and bugfixes!

Author

This effort is part of methods to derive information from web documents in order to build text databases for research (chiefly linguistic analysis and natural language processing). Extracting and pre-processing web texts to the exacting standards of scientific research presents a substantial challenge for those who conduct such research. Web corpus construction involves numerous design decisions, and this software package can help facilitate text data collection and enhance corpus quality.

DOI: 10.5281/zenodo.3460969

You can contact me via my contact page or GitHub.

Going further

Online documentation: trafilatura.readthedocs.io

Trafilatura: Italian word for wire drawing.


Download files


Source Distribution

trafilatura-0.7.0.tar.gz (2.6 MB)

Uploaded Source

Built Distribution


trafilatura-0.7.0-py3-none-any.whl (160.1 kB)

Uploaded Python 3

File details

Details for the file trafilatura-0.7.0.tar.gz.

File metadata

  • Download URL: trafilatura-0.7.0.tar.gz
  • Upload date:
  • Size: 2.6 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.3.0 pkginfo/1.5.0.1 requests/2.25.0 setuptools/46.0.0 requests-toolbelt/0.9.1 tqdm/4.45.0 CPython/3.6.9

File hashes

Hashes for trafilatura-0.7.0.tar.gz
Algorithm Hash digest
SHA256 c8ff3539c54f683e51994a3ca2e92a5858ff13feeec156dad2e1ea31602a43e0
MD5 99a647dd63a2f7c929e2426c94a4ace1
BLAKE2b-256 493ae84f025bfebc6a3f0db608794b86d56d0bb5ad437aa47f195a0657b8f9d2


File details

Details for the file trafilatura-0.7.0-py3-none-any.whl.

File metadata

  • Download URL: trafilatura-0.7.0-py3-none-any.whl
  • Upload date:
  • Size: 160.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.3.0 pkginfo/1.5.0.1 requests/2.25.0 setuptools/46.0.0 requests-toolbelt/0.9.1 tqdm/4.45.0 CPython/3.6.9

File hashes

Hashes for trafilatura-0.7.0-py3-none-any.whl
Algorithm Hash digest
SHA256 c6ab6fe85449796da5c35e174a83a23a06319bccd60d97db19cca09b0a8b6674
MD5 8f30fa6453d3c2a8a804ec344d2ab1d9
BLAKE2b-256 d2c4140473ee1432b26493a5b5f02cd0063d430a0c7412623078e7f2ca649a63

