
news-fetch is an open-source, easy-to-use news extractor with basic NLP utilities (text cleaning, keywords, summary) that just works.



news-fetch

news-fetch is an open-source, easy-to-use news crawler that extracts structured information from almost any news website. It can recursively follow internal hyperlinks and read RSS feeds to fetch both the most recent and older, archived articles. You only need to provide the root URL of the news website to crawl it completely. news-fetch combines the power of multiple state-of-the-art libraries and tools, such as news-please by Felix Hamborg and Newspaper3k by Lucas (欧阳象) Ou-Yang, and exposes features from both of their projects.

I built this package to reduce the number of NaN, '', [], or 'None' values returned when scraping certain news websites. It is platform-independent and written in Python 3, so programmers and developers can easily pull the news data into their own programs.

Source Links

  • PyPI: https://pypi.org/project/news-fetch/
  • Repository: https://santhoshse7en.github.io/news-fetch/
  • Documentation: https://santhoshse7en.github.io/news-fetch_doc/ (not yet created)


Extracted information

news-fetch extracts the following attributes from news articles (a sketch of such a record is shown after the list). Also have a look at an exemplary JSON file extracted by news-please.

  • headline
  • name(s) of author(s)
  • publication date
  • publication
  • category
  • source_domain
  • article
  • summary
  • keyword
  • url
  • language
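For a rough sense of the shape of one extracted record, the sketch below represents it as a plain Python dictionary. The keys simply mirror the attribute list above; the exact attribute names exposed by news-fetch may differ, and the values are invented for illustration.

# Illustrative only: one extracted article as a plain dict.
# Keys mirror the attribute list above; exact attribute names in
# news-fetch may differ, and the values are made up.
article_record = {
    'headline': 'g20 summit: trump and xi agree to restart us china trade talks',
    'authors': ['BBC News'],             # name(s) of author(s) -- illustrative value
    'date_publish': '2019-06-29',        # publication date -- illustrative format
    'publication': 'BBC',
    'category': 'world',
    'source_domain': 'www.bbc.co.uk',
    'article': '<full cleaned article text>',
    'summary': '<short generated summary>',
    'keyword': ['g20', 'trade talks', 'us', 'china'],
    'url': 'https://www.bbc.co.uk/news/world-48810070',
    'language': 'en',
}

print(article_record['headline'])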

Dependencies Installation

Use the package manager pip to install the required dependencies:

pip install -r requirements.txt
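
Alternatively, since the package is published on PyPI, it can be installed directly with pip:

pip install news-fetch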

Usage

Download the package by clicking the green download button on GitHub, or install it as shown above. To extract article URLs from a targeted website, call the google_search function; you only need to pass the keyword and the newspaper's URL as arguments.

>>> from newsfetch.google import google_search
>>> google = google_search('Alcoholics Anonymous', 'https://timesofindia.indiatimes.com/')

Use the urls attribute to get the links of all the scraped news articles.

>>> google.urls

[Screenshot in the original README: directory of the google search result object, listing its available attributes.]

To scrape all the details of a news article, call the newspaper function:

>>> from newsfetch.news import newspaper
>>> news = newspaper('https://www.bbc.co.uk/news/world-48810070')

[Screenshot in the original README: directory of the newspaper object, listing its available attributes.]

>>> news.headline

'g20 summit: trump and xi agree to restart us china trade talks'
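
Putting the two steps together, the short sketch below collects the scraped details for every search result. It assumes google.urls is a plain list of article links and uses getattr for attributes other than headline, since only headline is confirmed by the example above; the other names are taken from the attribute list and may differ.

from newsfetch.google import google_search
from newsfetch.news import newspaper

# Gather article URLs for a keyword from a specific news site.
google = google_search('Alcoholics Anonymous', 'https://timesofindia.indiatimes.com/')

articles = []
for url in google.urls:                  # assumed: a list of scraped article links
    news = newspaper(url)                # scrape one article
    articles.append({
        'headline': news.headline,                    # shown in the example above
        'summary': getattr(news, 'summary', None),    # attribute name assumed from the list above
        'keywords': getattr(news, 'keyword', None),   # attribute name assumed from the list above
        'url': url,
    })

print(len(articles), 'articles collected')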

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

Please make sure to update tests as appropriate.

License

MIT

