Easy multithreaded web scraper
Project description
Multithreaded scraper
Welcome! This is the documentation of the mt_scraper library for Python 3.
Description
This project is a multithreaded website scraper. Multithreading speeds up data collection from the web severalfold (more than 10x on an ordinary old work laptop). To use it, override the parse method for your needs and enjoy the benefits of multithreading (with all the caveats of its Python implementation).
The data is collected into a JSON file that stores a list of objects (dictionaries) with the scraped data.
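For example, with the parse method shown below, out.json might look roughly like this (a hypothetical illustration; the actual fields depend on your parse method):
[
    {
        "header": "Example Domain",
        "article": "This domain is for use in illustrative examples in documents.",
        "link": "https://www.iana.org/domains/example",
        "url_component": "http://example.com/"
    }
]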
Usage
Simple usage
The main usage scenario for the library:
import mt_scraper
scraper = mt_scraper.Scraper()
scraper.run()
As you can see, it takes only three lines of code.
What happens here
With this minimal usage, you get a scraper that collects data from the pages in the following list:
url_components_list = [
    'http://example.com/',
    'http://scraper.iamengineer.ru',
    'http://scraper.iamengineer.ru/bad-file.php',
    'http://badlink-for-scarper.ru',
]
The last two pages are included to demonstrate the two most common errors when retrieving data from the Internet: HTTP Error 404 (Not Found) and URLError (Name or service not known).
The actual URL is obtained by substituting each item of this list into a template:
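For reference, here is how those two failures surface when fetching pages with the standard library alone (a minimal illustrative sketch using urllib, not mt_scraper's actual fetching code):

import urllib.error
import urllib.request

def fetch(url):
    '''Return the page body, or None on the two common failures.'''
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            return response.read().decode('utf-8', errors='replace')
    except urllib.error.HTTPError as err:   # e.g. HTTP Error 404: Not Found
        print(url, err)
    except urllib.error.URLError as err:    # e.g. Name or service not known
        print(url, err.reason)
    return None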
url_template = '{}'
Data is accumulated in the file:
out_filename = 'out.json'
The work runs in 5 threads, and a task queue of length 5 is created (this matters, for example, when cancelling the operation from the keyboard: the queue length determines how many tasks have already been dispatched for execution):
threads_num = 5
queue_len = 5
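The library's internals are not reproduced here, but the pattern behind these two parameters is a bounded task queue consumed by a pool of worker threads. A minimal standard-library sketch of that pattern (an illustration, not mt_scraper's actual source):

import queue
import threading

threads_num = 5
queue_len = 5
url_components_list = ['http://example.com/']  # see the list above

task_queue = queue.Queue(maxsize=queue_len)  # at most 5 tasks waiting at once

def worker():
    while True:
        num, url = task_queue.get()
        if url is None:              # sentinel: no more work for this thread
            task_queue.task_done()
            break
        # ... fetch the page, parse it, store the result ...
        task_queue.task_done()

threads = [threading.Thread(target=worker) for _ in range(threads_num)]
for thread in threads:
    thread.start()

for num, url in enumerate(url_components_list):
    task_queue.put((num, url))       # blocks while the queue is full

for _ in range(threads_num):
    task_queue.put((None, None))     # one sentinel per worker

for thread in threads:
    thread.join()

Because the queue is bounded, cancelling from the keyboard loses at most queue_len already-dispatched tasks, which is the behaviour described above.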
The following is used as the parser method:
def parse(self, num, url_component, html):
    '''You must override this method.

    It must return a dictionary, or None if the page
    cannot be parsed.
    '''
    parser = MyDummyHTMLParser()
    parser.feed(html)
    obj = parser.obj
    obj['url_component'] = url_component
    return obj
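Putting the pieces together, a custom scraper could look like the sketch below, which overrides these settings along with parse (assuming, as an illustration, that the parameters above can be overridden as class attributes in a subclass, the same way parse is; MyDummyHTMLParser is defined in dummy_parser.py below):

import mt_scraper
from dummy_parser import MyDummyHTMLParser

class MyScraper(mt_scraper.Scraper):
    url_components_list = ['http://example.com/']
    url_template = '{}'
    out_filename = 'out.json'
    threads_num = 5
    queue_len = 5

    def parse(self, num, url_component, html):
        # Same logic as the default parse method shown above
        parser = MyDummyHTMLParser()
        parser.feed(html)
        obj = parser.obj
        obj['url_component'] = url_component
        return obj

scraper = MyScraper()
scraper.run()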
MyDummyHTMLParser is a simple HTML parser; it is notable only in that it relies solely on the standard library and requires no additional modules. File dummy_parser.py:
from html.parser import HTMLParser


class MyDummyHTMLParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.a_tag = False
        self.h1_tag = False
        self.p_tag = False
        self.obj = {}

    def handle_starttag(self, tag, attrs):
        if tag == 'h1':
            self.h1_tag = True
        elif tag == 'p':
            self.p_tag = True
        elif tag == 'a':
            self.a_tag = True
            for attr, value in attrs:
                if attr == 'href':
                    self.obj['link'] = value

    def handle_endtag(self, tag):
        if tag == 'h1':
            self.h1_tag = False
        elif tag == 'p':
            self.p_tag = False
        elif tag == 'a':
            self.a_tag = False

    def handle_data(self, data):
        if self.h1_tag:
            self.obj['header'] = data
        elif self.p_tag and not self.a_tag:
            self.obj['article'] = data
This approach is used only to demonstrate the capabilities of multithreading; in real projects it is recommended to use the lxml or BeautifulSoup libraries. A more advanced example is shown below in the section "Advanced usage".
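For instance, the same parse method rewritten with BeautifulSoup might look like this (a sketch assuming the bs4 package is installed; the element selection mirrors the dummy parser above):

from bs4 import BeautifulSoup

def parse(self, num, url_component, html):
    soup = BeautifulSoup(html, 'html.parser')
    obj = {'url_component': url_component}
    h1 = soup.find('h1')
    if h1 is not None:
        obj['header'] = h1.get_text(strip=True)
    p = soup.find('p')
    if p is not None:
        obj['article'] = p.get_text(strip=True)
    a = soup.find('a', href=True)
    if a is not None:
        obj['link'] = a['href']
    return obj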
File details
Details for the file mt_scraper-0.3.2.tar.gz
File metadata
- Download URL: mt_scraper-0.3.2.tar.gz
- Upload date:
- Size: 6.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/1.13.0 pkginfo/1.5.0.1 requests/2.21.0 setuptools/41.0.1 requests-toolbelt/0.9.1 tqdm/4.32.2 CPython/3.6.8
File hashes
Algorithm | Hash digest
---|---
SHA256 | cea6487f0ea37789bf4c8b8ebcbec09e2c29db30cd37e08f211429718470d049
MD5 | c5f28125a608c361792e73c4e3297e12
BLAKE2b-256 | 5ed32e2423754f3a4bbdf43d35a62b4acf3de6ed2912f860706e7eedc50c0539
File details
Details for the file mt_scraper-0.3.2-py3-none-any.whl
File metadata
- Download URL: mt_scraper-0.3.2-py3-none-any.whl
- Upload date:
- Size: 7.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/1.13.0 pkginfo/1.5.0.1 requests/2.21.0 setuptools/41.0.1 requests-toolbelt/0.9.1 tqdm/4.32.2 CPython/3.6.8
File hashes
Algorithm | Hash digest
---|---
SHA256 | 9fdd30759cd5361a1a161b550b6136d799648655c24e5a3d7ae75e96dcf37aae
MD5 | 3a5022ea7f2c71b5abed9142251eb213
BLAKE2b-256 | e1d8fe2d75cc30d76d6989465fb0c7013ad27a7d1f11f3298ac6c1d09b91d4d8