News scraping application
Project description
NewsLookout is a web scraping application for financial events. It is a scalable, fault-tolerant, modular and configurable multi-threaded Python console application. It is enterprise-ready and can run behind a proxy environment via automated schedulers.
The application is readily extended by adding custom modules via its ‘plugin’ architecture for additional news sources, custom data pre-processing and NLP-based news text analytics (e.g. entity recognition, negative event classification, economy trends, industry trends, etc.). For more details, refer to https://github.com/sandeep-sandhu/NewsLookout
Although the application runs with its default parameters without any special configuration, the parameters given in the default config file should be customised, especially the file and folder locations for the data, config file, log file, PID file, etc. Most importantly, certain model-related data for the NLTK and spaCy NLP libraries needs to be downloaded as part of installation.
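A hypothetical excerpt of such a customisation is shown below; the actual section and key names are defined in the sample config file shipped with the package, so treat everything here as a placeholder:

```ini
; Placeholder excerpt -- adjust paths to your environment; the real section
; and key names are those used in the application's sample config file.
[installation]
data_dir = /var/newslookout/data
log_file = /var/log/newslookout/newslookout.log
pid_file = /var/run/newslookout.pid
```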
For spaCy, run the following command: `python -m spacy download en_core_web_lg`
For NLTK, run the following commands within the Python shell: `import nltk` followed by `nltk.download()`
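For example, both downloads can be run directly from a shell as follows (calling nltk.download() with no arguments opens NLTK's interactive downloader; you may instead pass the names of the specific packages you need):

```sh
# Download the large English language model for spaCy
python -m spacy download en_core_web_lg

# Launch NLTK's interactive downloader; alternatively, fetch specific
# packages, e.g. python -c "import nltk; nltk.download('punkt')"
python -c "import nltk; nltk.download()"
```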
You can extend its functionality to scrape any additional website you need by customising the template file ‘template_for_plugin.py’. Name your custom plugin file with the same name as its class. Place it in the plugins_contrib folder and add the plugin’s name to the configuration file; it will be picked up automatically and run on the next application run. For examples of how a plugin can be written, take a look at the code of the already implemented plugins; a rough sketch of the pattern follows.
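The sketch below is illustrative only: the attribute and method names are placeholders, and the actual base class and required interface are defined in ‘template_for_plugin.py’ and the bundled plugins.

```python
# plugins_contrib/MyNewsPortal.py -- the file name matches the class name.
# Hypothetical sketch; copy template_for_plugin.py for the real interface.

class MyNewsPortal:
    """Plugin for an additional (fictional) news source."""

    # Placeholder seed URLs for this news site
    mainURL = "https://www.example-news-site.com/business"
    all_rss_feeds = ["https://www.example-news-site.com/rss/business.xml"]

    def extractArticleBody(self, html_text: str) -> str:
        # Site-specific logic to pull the article text out of the raw HTML,
        # e.g. BeautifulSoup selectors tuned to this site's page layout.
        return html_text
```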
A number of Python libraries for web scraping already exist, so why consider this application for scraping news? Because it has been built specifically for sourcing news and has several useful features. Some of the notable ones are:
[x] Multi-threaded for scraping several news sites in parallel
[x] Rigorously tested for the specific websites enabled in the plugins; handles several quirks and formatting problems caused by inconsistent and non-standard HTML code.
[x] Reduces network traffic, and consequently webserver load, by pausing between network requests; this throttling avoids overloading the news web servers and the blocking that heavy traffic usually triggers.
[x] Keeps track of failures and the history of sites already scraped to avoid re-visiting them
[x] Completely configurable functionality
[x] Works with proxy servers
[x] Enterprise-ready functionality: configurable event logging, segregation of the data store, etc.
[x] Runnable without a frontend, as a daemon or via a scheduler (see the example after this list).
[x] Extensible with custom plugins that can be rapidly written with minimal additional code to support additional news sources; writing a new plugin does not require low-level code for handling network traffic and the HTTP protocol.
[x] Rigorous text cleaning
[x] Built-in NLP support for keyword extraction and computing document similarity
[x] Text de-duplication using advanced NLP models
[x] Extensible data-processing plugins to customise the processing applied after web scraping
[x] Enables web scraping of news archives to retrieve news from previous dates and establish a history for analysis
[x] Saves the present state and resumes unfinished URLs if the application is shut down midway through web scraping
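For instance, an unattended daily run behind a corporate proxy could be scheduled with cron along the following lines. The entry-point name, flags and paths below are placeholders for illustration only, not the application's actual CLI; substitute whatever invocation and config location your installation uses.

```sh
# Standard proxy environment variables, picked up by most Python HTTP stacks
HTTP_PROXY=http://proxy.example.com:8080
HTTPS_PROXY=http://proxy.example.com:8080

# Hypothetical invocation -- replace with the real entry point and config path
30 6 * * * newslookout --config /etc/newslookout/newslookout.conf >> /var/log/newslookout/cron.log 2>&1
```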
Hashes for NewsLookout-1.9.1-py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | 1e3e18049314709fa38abf1447acf27e6e4c599e1b14a8ca770fb96ed85a7f27
MD5 | fac8e78b77115200de1f4f01072982a4
BLAKE2b-256 | 15592b23cf71dbc0a32fff0c6894d9bf1048acd85ff34cf7fdf64e09d0cc9419