# pAsynCrawler
## Installation

```shell
pip install pAsynCrawler
```
## Features

- Fetch data asynchronously
- Parse data with multiprocessing
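The two features combine a common pattern: I/O-bound fetching runs concurrently on an event loop, while CPU-bound parsing is spread across worker processes. A minimal sketch of that pattern using only the standard library (the names `fake_fetch`, `parse`, and `crawl` are illustrative, not pAsynCrawler's actual internals):

```python
import asyncio
from concurrent.futures import ProcessPoolExecutor


async def fake_fetch(url):
    # A real crawler would await an HTTP client here (e.g. aiohttp).
    await asyncio.sleep(0)
    return f"<html>{url}</html>"


def parse(text):
    # CPU-bound parsing runs in a worker process.
    return text.upper()


async def crawl(urls):
    # Fetch all pages concurrently on the event loop...
    pages = await asyncio.gather(*(fake_fetch(u) for u in urls))
    # ...then parse them in parallel across processes.
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor(max_workers=2) as pool:
        results = await asyncio.gather(
            *(loop.run_in_executor(pool, parse, p) for p in pages)
        )
    return results


if __name__ == '__main__':
    print(asyncio.run(crawl(['a', 'b'])))
```

The `asy_fetch` and `mp_parse` arguments in the example below presumably bound the two pools in the same way `max_workers` does here.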
## Example

```python
from bs4 import BeautifulSoup

from pAsynCrawler import AsynCrawler, flattener


def parser_0(response_text):
    soup = BeautifulSoup(response_text, 'html.parser')
    menus = soup.select('ul > li > span > a')
    datas = tuple(x.text for x in menus)
    urls = tuple(x.attrs['href'] for x in menus)
    return (datas, urls)


def parser_1(response_text):
    soup = BeautifulSoup(response_text, 'html.parser')
    menus = soup.select('ul > li > a')
    datas = tuple(x.text for x in menus)
    urls = tuple(x.attrs['href'] for x in menus)
    return (datas, urls)


if __name__ == '__main__':
    ac = AsynCrawler(asy_fetch=20, mp_parse=8)
    datas_1, urls_1 = ac.fetch_and_parse(parser_0, ['https://www.example.com'])
    datas_2, urls_2 = ac.fetch_and_parse(parser_1, flattener(urls_1))
```
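`flattener` is not documented on this page. Since each parser returns one tuple of URLs per fetched page, a helper with that name plausibly flattens those per-page tuples into a single flat sequence before the next crawl round. A hypothetical one-level flattener, shown only to illustrate the shape of the data:

```python
def flattener(nested):
    """Flatten one level of nesting into a single list (illustrative only)."""
    return [item for group in nested for item in group]


# One tuple of URLs per fetched page, as a parser above would return:
urls_1 = (('https://a.example/1', 'https://a.example/2'),
          ('https://b.example/1',))
print(flattener(urls_1))
# → ['https://a.example/1', 'https://a.example/2', 'https://b.example/1']
```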