A Python class that acts as a wrapper for the ProxyCrawl scraping and crawling API
ProxyCrawl API Python class
A lightweight, dependency-free Python class that acts as a wrapper for the ProxyCrawl API.
Installing
Choose a way of installing:
- Download the Python class from GitHub.
- Or use the PyPI Python package manager:
pip install proxycrawl
Then import the ProxyCrawlAPI.
Python 2:
from proxycrawl import ProxyCrawlAPI
Python 3:
from proxycrawl.proxycrawl_api import ProxyCrawlAPI
Class usage
First, initialize the ProxyCrawlAPI class:
api = ProxyCrawlAPI({ 'token': 'YOUR_PROXYCRAWL_TOKEN' })
GET requests
Pass the URL that you want to scrape, plus any of the options available in the API documentation.
api.get(url, options = {})
Example:
response = api.get('https://www.facebook.com/britneyspears')
if response['status_code'] == 200:
print(response['body'])
You can pass any options supported by the ProxyCrawl API.
Example:
response = api.get('https://www.reddit.com/r/pics/comments/5bx4bx/thanks_obama/', {
'user_agent': 'Mozilla/5.0 (Windows NT 6.2; rv:20.0) Gecko/20121202 Firefox/30.0',
'format': 'json'
})
if response['status_code'] == 200:
print(response['body'])
POST requests
Pass the URL that you want to scrape and the data that you want to send, which can be either a dictionary or a string, plus any of the options available in the API documentation.
api.post(url, dictionary or string data, options = {})
Example:
response = api.post('https://producthunt.com/search', { 'text': 'example search' })
if response['status_code'] == 200:
print(response['body'])
You can send the data as application/json instead of x-www-form-urlencoded by setting the post_content_type option to json.
import json
response = api.post('https://httpbin.org/post', json.dumps({ 'some_json': 'with some value' }), { 'post_content_type': 'json' })
if response['status_code'] == 200:
print(response['body'])
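For context, the difference between the two body encodings can be illustrated with the standard library alone; this sketch involves no ProxyCrawl calls and simply shows what the request body looks like in each case:

```python
import json
from urllib.parse import urlencode

data = {'some_json': 'with some value'}

# x-www-form-urlencoded: the default encoding for api.post
form_body = urlencode(data)
print(form_body)   # some_json=with+some+value

# application/json: what is sent when 'post_content_type' is 'json'
json_body = json.dumps(data)
print(json_body)   # {"some_json": "with some value"}
```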
JavaScript requests
If you need to scrape websites built with JavaScript frameworks such as React, Angular, or Vue, just pass your JavaScript token and use the same calls. Note that only .get is available for JavaScript requests, not .post.
api = ProxyCrawlAPI({ 'token': 'YOUR_JAVASCRIPT_TOKEN' })
response = api.get('https://www.nfl.com')
if response['status_code'] == 200:
print(response['body'])
In the same way, you can pass additional JavaScript options:
response = api.get('https://www.freelancer.com', { 'page_wait': 5000 })
if response['status_code'] == 200:
print(response['body'])
Original status
You can always get the original status and the ProxyCrawl status from the response. Read the ProxyCrawl documentation to learn more about these statuses.
response = api.get('https://craigslist.org')
print(response['headers']['original_status'])
print(response['headers']['pc_status'])
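One common use of pc_status is retrying when ProxyCrawl itself reports a failure. Below is a minimal sketch of that pattern; the fetch_with_retry helper and the stubbed fake_get function are illustrative only and are not part of the library:

```python
import time

def fetch_with_retry(api_get, url, retries=3, delay=1):
    """Retry a request while pc_status indicates a ProxyCrawl-side failure."""
    response = None
    for attempt in range(retries):
        response = api_get(url)
        # pc_status comes back as a header string; '200' means success
        if response['headers'].get('pc_status') == '200':
            return response
        time.sleep(delay)  # back off before retrying
    return response

# Stub standing in for api.get, so the sketch runs without the library
def fake_get(url):
    return {'headers': {'original_status': '200', 'pc_status': '200'},
            'body': 'ok'}

result = fetch_with_retry(fake_get, 'https://example.com', delay=0)
print(result['body'])  # ok
```

In real code you would pass api.get (bound to your ProxyCrawlAPI instance) instead of the stub.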
If you have questions or need help using the library, please open an issue or contact us.
Custom timeout
If you need to use a custom timeout, you can pass it to the class instance creation like the following:
api = ProxyCrawlAPI({ 'token': 'TOKEN', 'timeout': 120 })
Timeout is in seconds.
Copyright 2020 ProxyCrawl