
Downloads web pages, scrapes main text and comments while preserving some structure, and converts to TXT, CSV, JSON and XML

Project description



Trafilatura is a Python package and command-line tool which seamlessly downloads, parses, and scrapes web page data: it can extract metadata, main body text and comments while preserving parts of the text formatting and page structure. The output can be converted to different formats.

Distinguishing between a whole page and the page’s essential parts can help to alleviate many quality problems related to web text processing, by dealing with the noise caused by recurring elements (headers and footers, ads, links/blogroll, etc.).

The extractor aims to be precise enough not to miss texts or discard valid documents. In addition, it must be robust, but also reasonably fast. With these objectives in mind, Trafilatura is designed to run in production on millions of web documents.


Features

  • Seamless online (including page retrieval) or parallelized offline processing using URLs, HTML files or parsed HTML trees as input
  • Several output formats supported:
    • Plain text (minimal formatting)
    • CSV (with metadata, tab-separated values)
    • JSON (with metadata)
    • XML (for metadata and structure)
    • TEI-XML
  • Robust extraction algorithm, using readability and jusText as fallbacks; reasonably efficient with lxml:
    • Focuses on the document’s main text and/or comments
    • Structural elements preserved: paragraphs, titles, lists, quotes, code, line breaks, in-line text formatting (experimental)
    • Extraction of metadata (title, author, date, site name, categories and tags)
  • URL lists:
    • Generation of link lists from ATOM/RSS feeds
    • Efficient processing of URL queues
    • Filtering of blacklisted or already processed URLs
  • Optional language detection on extracted content
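The URL-queue handling described above can be illustrated with a minimal sketch in plain Python. This is not trafilatura's actual implementation; the function and parameter names are hypothetical:

```python
from collections import deque

def process_queue(urls, blacklist, fetch):
    """Process a URL queue, skipping blacklisted or already seen URLs."""
    queue = deque(urls)
    seen = set()
    results = {}
    while queue:
        url = queue.popleft()
        if url in blacklist or url in seen:
            continue  # skip unwanted or duplicate URLs
        seen.add(url)
        results[url] = fetch(url)  # e.g. download and extract the page
    return results
```

Keeping a `seen` set alongside the blacklist ensures each URL is fetched at most once, even when the input list contains duplicates.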

Evaluation and alternatives

For more detailed results, see the evaluation page and evaluation script. To reproduce the tests, clone the repository, install the necessary packages, and run the evaluation script with the data provided in the tests directory.

400 documents, 1186 text and 1198 boilerplate segments (2020-07-16)
Python Package                     Precision  Recall  Accuracy  F-Score  Diff.
newspaper3k 0.2.8                  0.916      0.577   0.763     0.708    11.8x
justext 2.2.0 (tweaked)            0.867      0.651   0.777     0.744    4.9x
goose3 3.1.6                       0.953      0.635   0.803     0.762    17.3x
baseline (text markup)             0.738      0.804   0.760     0.770    1x
boilerpy3 1.0.2 (article mode)     0.847      0.711   0.792     0.773    4.4x
dragnet 2.0.4                      0.906      0.704   0.816     0.792    2.8x
readability-lxml 0.8.1             0.913      0.739   0.835     0.817    5.4x
news-please 1.4.25                 0.918      0.739   0.837     0.819    56.4x
trafilatura 0.5.1                  0.927      0.854   0.894     0.889    3.1x
trafilatura 0.5.1 (+ fallbacks)    0.933      0.885   0.911     0.908    6.8x
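The F-Score column is the harmonic mean of precision and recall (F1). The table rows can be checked directly, for example the two trafilatura entries:

```python
def f_score(precision, recall):
    """Harmonic mean of precision and recall (F1)."""
    return 2 * precision * recall / (precision + recall)

# trafilatura 0.5.1: precision 0.927, recall 0.854
print(round(f_score(0.927, 0.854), 3))  # 0.889
# trafilatura 0.5.1 (+ fallbacks): precision 0.933, recall 0.885
print(round(f_score(0.933, 0.885), 3))  # 0.908
```

The harmonic mean penalizes imbalance, which is why goose3 (precision 0.953, recall 0.635) ends up with a lower F-score than the much less precise baseline.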

External evaluations:


Installation

The primary installation method is the Python package manager: pip install --upgrade trafilatura.

For more details please read the installation documentation.


Usage

Trafilatura can be used with Python or on the command line.

In a nutshell, with Python:

>>> import trafilatura
>>> downloaded = trafilatura.fetch_url('')
>>> trafilatura.extract(downloaded)
# outputs main content and comments as plain text ...

On the command-line:

$ trafilatura -u ""
# outputs main content and comments as plain text ...

For more information please refer to the usage documentation.


License

trafilatura is distributed under the GNU General Public License v3.0. If you wish to redistribute this library but feel bound by the license conditions, consider interacting at arm's length, multi-licensing with compatible licenses, or contacting me.

See also GPL and free software licensing: What’s in it for business?

Going further

Online documentation:

Trafilatura: Italian word for wire drawing.


Roadmap

  • [X] Language detection on the extracted content
  • [-] Duplicate detection at sentence, paragraph and document level using a least recently used (LRU) cache
  • [-] URL lists and document management
  • [ ] Sitemaps processing
  • [ ] Interaction with web archives (notably WARC format)
  • [ ] Configuration and extraction parameters
  • [ ] Integration of natural language processing tools
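The duplicate-detection roadmap item above can be sketched with a small LRU structure in plain Python. This is a simplified illustration, not trafilatura's implementation; the class name and capacity default are hypothetical:

```python
from collections import OrderedDict

class LRUDeduplicator:
    """Remember hashes of recently seen text segments, evicting the
    least recently used entry once capacity is reached."""

    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.cache = OrderedDict()

    def is_duplicate(self, segment):
        key = hash(segment.strip())
        if key in self.cache:
            self.cache.move_to_end(key)  # mark as recently used
            return True
        self.cache[key] = True
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the oldest entry
        return False
```

Bounding the cache keeps memory constant when processing millions of documents, at the cost of occasionally re-admitting a segment that was evicted.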


Contributions are welcome!

Feel free to file issues on the dedicated page. Thanks to the contributors who submitted features and bugfixes!


Context

This effort is part of methods to derive information from web documents in order to build text databases for research (chiefly linguistic analysis and natural language processing). Extracting and pre-processing web texts to the exacting standards of scientific research presents a substantial challenge for those who conduct such research. Web corpus construction involves numerous design decisions, and this software package can help facilitate text data collection and enhance corpus quality.

You can contact me via my contact page or GitHub.

Project details

Download files

Download the file for your platform.

Files for trafilatura, version 0.5.2
Filename                            Size      File type  Python version
trafilatura-0.5.2-py3-none-any.whl  152.3 kB  Wheel      py3
trafilatura-0.5.2.tar.gz            2.6 MB    Source     None
