# pAsynCrawler

## Installation

```shell
pip install pAsynCrawler
```
## Features

- Fetch data asynchronously
- Parse data with multiprocessing
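The combination of asynchronous fetching with parallel parsing follows a common pattern: I/O-bound downloads run concurrently on an event loop, while CPU-bound parsing is handed off to worker processes. A minimal standard-library sketch of that pattern is below; it is a conceptual illustration, not pAsynCrawler's actual implementation, and `fetch` and `parse` are hypothetical stand-ins for real HTTP and HTML-parsing code.

```python
import asyncio
from concurrent.futures import ProcessPoolExecutor


async def fetch(url):
    """Stand-in for an async HTTP request (e.g. via aiohttp/httpx)."""
    await asyncio.sleep(0)  # placeholder for network I/O
    return f"<html>{url}</html>"


def parse(text):
    """Stand-in for a CPU-bound parsing step."""
    return text.removeprefix("<html>").removesuffix("</html>")


async def crawl(urls):
    # Fetch all pages concurrently on the event loop ...
    pages = await asyncio.gather(*(fetch(u) for u in urls))
    # ... then parse them in parallel worker processes.
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor() as pool:
        parsed = await asyncio.gather(
            *(loop.run_in_executor(pool, parse, p) for p in pages)
        )
    return list(parsed)


if __name__ == "__main__":
    print(asyncio.run(crawl(["page-1", "page-2"])))
```

Splitting the work this way keeps the event loop free for network I/O while the process pool sidesteps the GIL for parsing.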
## Example
```python
from bs4 import BeautifulSoup

from pAsynCrawler import AsynCrawler, flattener


def parser_0(response_text):
    soup = BeautifulSoup(response_text, 'html.parser')
    menus = soup.select('ul > li > span > a')
    datas = tuple(x.text for x in menus)
    urls = tuple(x.attrs['href'] for x in menus)
    return (datas, urls)


def parser_1(response_text):
    soup = BeautifulSoup(response_text, 'html.parser')
    menus = soup.select('ul > li > a')
    datas = tuple(x.text for x in menus)
    urls = tuple(x.attrs['href'] for x in menus)
    return (datas, urls)


if __name__ == '__main__':
    ac = AsynCrawler(asy_fetch=20, mp_parse=8)
    datas_1, urls_1 = ac.fetch_and_parse(parser_0, ['https://www.example.com'])
    datas_2, urls_2 = ac.fetch_and_parse(parser_1, flattener(urls_1))
```
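In the example above, each parser returns a tuple of URL tuples (one per page), so the second `fetch_and_parse` call needs them flattened into a single sequence. A plausible stand-in for the library's `flattener` helper, assuming it flattens one level of nesting, would look like this (an illustration, not the library's actual code):

```python
from itertools import chain


def flattener_sketch(nested):
    # Flatten one level of nesting:
    # (("a", "b"), ("c",)) -> ("a", "b", "c")
    return tuple(chain.from_iterable(nested))
```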