A package designed to scrape webpages using aiohttp and asyncio. It includes error handling to overcome common issues, such as sites blocking you after n requests over a short period.
Async-scrape
Perform web scraping asynchronously
Async-scrape is a package which uses asyncio and aiohttp to scrape websites and has useful features built in.
Features
- Breaks - pause scraping when a website blocks your requests consistently
- Rate limit - slow down scraping to prevent being blocked
Installation
Async-scrape requires C++ Build tools v15+ to run.
pip install async-scrape
How to use it
Key input parameters:
- post_process_func - the callable used to process the returned response
- post_process_kwargs - any kwargs to be passed to the callable
- use_proxy - whether a proxy should be used (if True, provide either a proxy or pac_url variable; see the sketch below)
- attempt_limit - how many attempts each request is given before it is marked as failed
- rest_wait - how long the programme should pause between loops
- call_rate_limit - limits the rate of requests (useful to stop getting blocked by websites)
- randomise_headers - if set to True, a new set of headers will be generated between each request
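If you need to route requests through a proxy, the same constructor arguments described above can be combined. Below is a minimal sketch, not a definitive configuration: the pac_url value is a placeholder to replace with your own PAC file URL, and the value passed to call_rate_limit is illustrative (check the package documentation for its exact unit).
# Sketch: proxy-enabled, rate-limited instance (pac_url is a placeholder)
from async_scrape import AsyncScrape

def post_process(html, resp, **kwargs):
    """Minimal post-processing callable"""
    return resp.status

async_scrape_with_proxy = AsyncScrape(
    post_process_func=post_process,
    post_process_kwargs={},
    use_proxy=True,                           # route requests through a proxy
    pac_url="http://example.com/proxy.pac",   # placeholder PAC file URL
    attempt_limit=5,
    rest_wait=60,
    call_rate_limit=60,                       # illustrative value; limits request rate
    randomise_headers=True
)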
Get requests
# Create an instance
from async_scrape import AsyncScrape

def post_process(html, resp, **kwargs):
    """Function to process the gathered response from the request"""
    if resp.status == 200:
        return "Request worked"
    else:
        return "Request failed"

async_Scrape = AsyncScrape(
    post_process_func=post_process,
    post_process_kwargs={},
    fetch_error_handler=None,
    use_proxy=False,
    proxy=None,
    pac_url=None,
    acceptable_error_limit=100,
    attempt_limit=5,
    rest_between_attempts=True,
    rest_wait=60,
    call_rate_limit=None,
    randomise_headers=True
)

urls = [
    "https://www.google.com",
    "https://www.bing.com",
]

resps = async_Scrape.scrape_all(urls)
Post requests
# Create an instance
from async_scrape import AsyncScrape

def post_process(html, resp, **kwargs):
    """Function to process the gathered response from the request"""
    if resp.status == 200:
        return "Request worked"
    else:
        return "Request failed"

async_Scrape = AsyncScrape(
    post_process_func=post_process,
    post_process_kwargs={},
    fetch_error_handler=None,
    use_proxy=False,
    proxy=None,
    pac_url=None,
    acceptable_error_limit=100,
    attempt_limit=5,
    rest_between_attempts=True,
    rest_wait=60,
    call_rate_limit=None,
    randomise_headers=True
)

urls = [
    "https://eos1jv6curljagq.m.pipedream.net",
    "https://eos1jv6curljagq.m.pipedream.net",
]

payloads = [
    {"value": 0},
    {"value": 1}
]

resps = async_Scrape.scrape_all(urls, payloads=payloads)
Response
The response object is a list of dicts in the format:
{
    "url": url,                # url of the request
    "req": req,                # combination of url and params
    "func_resp": func_resp,    # response from the post-processing function
    "status": resp.status,     # http status
    "error": None              # any error encountered
}
License
MIT
Free Software, Hell Yeah!