
Project description

Newspaper-Scraper

The all-in-one Python package for seamless newspaper article indexing, scraping, and processing – supports public and premium content!

Intro

While tools like newspaper3k and goose3 can extract articles from news websites, they require a dedicated article URL for older articles and do not support paywalled content. This package aims to solve both issues by providing a unified interface for indexing, extracting, and processing newspaper articles.

  1. Indexing: Index articles from a newspaper website, using the BeautifulSoup package for public articles and Selenium for paywalled content.
  2. Extraction: Extract article content using the goose3 package.
  3. Processing: Process articles for NLP features using the spaCy package.

Indexing relies on a dedicated definition file for each newspaper. A few newspapers are already supported (see the table below), and new ones are easy to add; a rough sketch of such a file follows.
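
The package's internal interface for these definition files is not documented here, so the following is only a minimal sketch of the idea: a per-paper class that maps a publication date to an archive page and collects article links from it. The class name, URL scheme, and CSS selector are illustrative assumptions, not the package's actual API.

# Hypothetical newspaper definition -- the class name, URL scheme, and
# CSS selector are illustrative assumptions, not the real internal API.
import datetime as dt

import requests
from bs4 import BeautifulSoup

class ExamplePaper:
    def archive_url(self, day: dt.date) -> str:
        # Assumed URL scheme: one archive page per day.
        return f'https://www.example-paper.com/archive/{day:%Y-%m-%d}'

    def article_links(self, day: dt.date) -> list[str]:
        # Fetch the archive page and collect article URLs with
        # BeautifulSoup, mirroring how public articles are indexed.
        html = requests.get(self.archive_url(day), timeout=10).text
        soup = BeautifulSoup(html, 'html.parser')
        return [a['href'] for a in soup.select('a.article-link[href]')]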

Supported Newspapers

Newspaper      Country   Time span    Number of articles
Der Spiegel    Germany   Since 2000   tbd
Die Welt       Germany   Since 2000   tbd
Bild           Germany   Since 2006   tbd
Die Zeit       Germany   Since 1946   tbd
Handelsblatt   Germany   Since 2003   tbd

Setup

It is recommended to install the package in a dedicated Python environment.
To install the package via pip, run the following command:

pip install newspaper-scraper

To also include the NLP processing functionality (via spaCy), run the following command:

pip install newspaper-scraper[nlp]
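
Note that spaCy needs a trained language pipeline to compute NLP features. If the package does not download one automatically (an assumption worth verifying), a German model can be fetched with the standard spaCy command:

python -m spacy download de_core_news_sm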

Usage

To index, extract, and process all public and premium articles from Der Spiegel published in August 2021, run the following code:

import newspaper_scraper as ns
from credentials import username, password

with ns.Spiegel(db_file='articles.db') as news:
    # Index all articles published in the given date range.
    news.index_articles_by_date_range('2021-08-01', '2021-08-31')
    # Scrape publicly accessible articles.
    news.scrape_public_articles()
    # Log in and scrape premium (paywalled) articles.
    news.scrape_premium_articles(username=username, password=password)
    # Compute NLP features for the scraped article texts.
    news.nlp()

This will create an SQLite database file called articles.db in the current working directory. The database contains the following tables:

  • tblArticlesIndexed: Contains all indexed articles with their scraping/processing status and whether they are public or premium content.
  • tblArticlesScraped: Contains metadata for all scraped articles, provided by goose3.
  • tblArticlesProcessed: Contains NLP features of the cleaned article text, provided by spaCy.
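
Since articles.db is a plain SQLite file, these tables can be inspected with any SQLite client. A minimal sketch using Python's built-in sqlite3 module (only the table names above come from the package; the exact column layout would need to be checked against the actual schema):

import sqlite3

con = sqlite3.connect('articles.db')
# List the tables the scraper created.
tables = con.execute(
    "SELECT name FROM sqlite_master WHERE type='table'"
).fetchall()
print([name for (name,) in tables])
# Count the indexed articles.
n_indexed = con.execute('SELECT COUNT(*) FROM tblArticlesIndexed').fetchone()[0]
print(f'{n_indexed} articles indexed')
con.close()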

Download files

Download the file for your platform.

Source Distribution

newspaper_scraper-0.2.0.tar.gz (19.4 kB)

Uploaded Source

Built Distribution

newspaper_scraper-0.2.0-py3-none-any.whl (28.2 kB)

Uploaded Python 3

File details

Details for the file newspaper_scraper-0.2.0.tar.gz.

File metadata

  • Download URL: newspaper_scraper-0.2.0.tar.gz
  • Size: 19.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.2

File hashes

Hashes for newspaper_scraper-0.2.0.tar.gz

Algorithm     Hash digest
SHA256        d8d409b847f0243055cd643e08c45740a151843d73119942ed922e36a8001a4f
MD5           ea0a14395d3c2b8b3d910cd344546886
BLAKE2b-256   15a0e6dc8135daa2d773ccd6857d9e367da17fe3280147906e2c95ed80ba148a


File details

Details for the file newspaper_scraper-0.2.0-py3-none-any.whl.

File hashes

Hashes for newspaper_scraper-0.2.0-py3-none-any.whl

Algorithm     Hash digest
SHA256        9976161b0aa6246be275bb7f0a3dad91acc3d4081b0c99b24a7db57a3976ffd5
MD5           36dade2d9af6512d3521e8eb4ac5d939
BLAKE2b-256   6555f045f7e4aabe361b338de042ca0c6ed3006fba5085dd9b626c7fa7af2b5c

