
Official Python client for the ScrapingAnt API.

Project description

ScrapingAnt API client for Python


scrapingant-client is the official library for accessing the ScrapingAnt API from your Python applications. It provides useful features like parameter encoding to improve the ScrapingAnt usage experience. Requires Python 3.6+.

Quick Start

from scrapingant_client import ScrapingAntClient

client = ScrapingAntClient(token='<YOUR-SCRAPINGANT-API-TOKEN>')
# Scrape the example.com site
result = client.general_request('https://example.com')
print(result.content)

Install

pip install scrapingant-client

If you need async support:

pip install scrapingant-client[async]

API token

To get an API token, you'll need to register at the ScrapingAnt Service.

API Reference

All public classes, methods and their parameters can be inspected in this API reference.

ScrapingAntClient(token)

Main class of this library.

Param Type
token string

ScrapingAntClient.general_request and ScrapingAntClient.general_request_async

https://docs.scrapingant.com/request-response-format#available-parameters

Param               Type                            Default
url                 string                          -
method              string                          GET
cookies             List[Cookie]                    None
headers             Dict[str, str]                  None
js_snippet          string                          None
proxy_type          ProxyType                       datacenter
proxy_country       str                             None
wait_for_selector   str                             None
browser             boolean                         True
return_page_source  boolean                         False
data                same as requests param 'data'   None
json                same as requests param 'json'   None

IMPORTANT NOTE: js_snippet will be encoded to Base64 automatically by the ScrapingAnt client library.
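
Most of these parameters are optional and are passed straight through to the ScrapingAnt API. The following is a rough sketch combining several of them; the values are illustrative, and the ProxyType.residential member is assumed to exist alongside the datacenter default shown above.

from scrapingant_client import ScrapingAntClient, ProxyType

client = ScrapingAntClient(token='<YOUR-SCRAPINGANT-API-TOKEN>')

# Illustrative values; proxy_type/proxy_country/wait_for_selector are all optional
result = client.general_request(
    'https://example.com',
    proxy_type=ProxyType.residential,  # assumed member; datacenter is the default
    proxy_country='US',
    wait_for_selector='div.content',   # wait until this CSS selector appears on the page
    browser=True,                      # render the page in a browser (default)
    return_page_source=False,
)
print(result.status_code)
print(result.content)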


Cookie

Class defining a cookie. Currently it supports only name and value.

Param Type
name string
value string

Response

Class defining a response from the API.

Param Type
content string
cookies List[Cookie]
status_code int
text string
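
As a minimal sketch, these fields can be read directly from the result of a request:

from scrapingant_client import ScrapingAntClient

client = ScrapingAntClient(token='<YOUR-SCRAPINGANT-API-TOKEN>')

result = client.general_request('https://example.com')
print(result.status_code)      # HTTP status code returned for the request
print(result.content[:200])    # beginning of the page content
for cookie in result.cookies:  # cookies returned by the target site, as Cookie objects
    print(cookie.name, cookie.value)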

Exceptions

ScrapingantClientException is the base exception class, used for all errors.

Exception Reason
ScrapingantInvalidTokenException The API token is wrong, or you have exceeded the API call limit
ScrapingantInvalidInputException An invalid value was provided. Please check the error message for more info
ScrapingantInternalException Something went wrong on the server side. Try again later or contact ScrapingAnt support
ScrapingantSiteNotReachableException The requested URL is not reachable. Please check it locally
ScrapingantDetectedException The anti-bot detection system has detected the request. Please retry or change the request settings
ScrapingantTimeoutException A timeout occurred while communicating with the ScrapingAnt servers. Check your network connection, try again later, or contact support
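
Since ScrapingantClientException is the base class, specific errors can be handled individually while the base class catches the rest. A short sketch (the import path for ScrapingantDetectedException is assumed to match the other exceptions; a fuller retry loop is shown in the examples below):

from scrapingant_client import (
    ScrapingAntClient,
    ScrapingantClientException,
    ScrapingantDetectedException,
)

client = ScrapingAntClient(token='<YOUR-SCRAPINGANT-API-TOKEN>')

try:
    result = client.general_request('https://example.com')
except ScrapingantDetectedException:
    # The anti-bot detection system flagged the request: retry or change the request settings
    print('Request was detected, consider changing the request settings')
except ScrapingantClientException as e:
    # Any other ScrapingAnt error is caught by the base class
    print(f'ScrapingAnt error: {repr(e)}')
else:
    print(result.content)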

Examples

Sending custom cookies

from scrapingant_client import ScrapingAntClient
from scrapingant_client import Cookie

client = ScrapingAntClient(token='<YOUR-SCRAPINGANT-API-TOKEN>')

result = client.general_request(
    'https://httpbin.org/cookies',
    cookies=[
        Cookie(name='cookieName1', value='cookieVal1'),
        Cookie(name='cookieName2', value='cookieVal2'),
    ]
)
print(result.content)
# Response cookies are returned as a list of Cookie objects
# They can be used in subsequent requests
response_cookies = result.cookies 

Executing custom JS snippet

from scrapingant_client import ScrapingAntClient

client = ScrapingAntClient(token='<YOUR-SCRAPINGANT-API-TOKEN>')

customJsSnippet = """
var str = 'Hello, world!';
var htmlElement = document.getElementsByTagName('html')[0];
htmlElement.innerHTML = str;
"""
result = client.general_request(
    'https://example.com',
    js_snippet=customJsSnippet,
)
print(result.content)

Exception handling and retries

from scrapingant_client import ScrapingAntClient, ScrapingantClientException, ScrapingantInvalidInputException

client = ScrapingAntClient(token='<YOUR-SCRAPINGANT-API-TOKEN>')

RETRIES_COUNT = 3


def parse_html(html: str):
    ...  # Implement your data extraction here


parsed_data = None
for retry_number in range(RETRIES_COUNT):
    try:
        scrapingant_response = client.general_request(
            'https://example.com',
        )
    except ScrapingantInvalidInputException as e:
        print(f'Got invalid input exception: {repr(e)}')
        break  # We are not retrying if request params are not valid
    except ScrapingantClientException as e:
        print(f'Got ScrapingAnt exception {repr(e)}')
    except Exception as e:
        print(f'Got unexpected exception {repr(e)}')  # please report this kind of exception by creating a new issue
    else:
        try:
            parsed_data = parse_html(scrapingant_response.content)
            break  # Data is parsed successfully, so we don't need to retry
        except Exception as e:
            print(f'Got exception while parsing data {repr(e)}')

if parsed_data is None:
    print(f'Failed to retrieve and parse data after {RETRIES_COUNT} tries')
    # Can sleep and retry later, or stop the script execution and investigate the reason
else:
    print(f'Successfully parsed data: {parsed_data}')

Sending custom headers

from scrapingant_client import ScrapingAntClient

client = ScrapingAntClient(token='<YOUR-SCRAPINGANT-API-TOKEN>')

result = client.general_request(
    'https://httpbin.org/headers',
    headers={
        'test-header': 'test-value'
    }
)
print(result.content)

# HTTP basic auth example
result = client.general_request(
    'https://jigsaw.w3.org/HTTP/Basic/',
    headers={'Authorization': 'Basic Z3Vlc3Q6Z3Vlc3Q='}
)
print(result.content)

Simple async example

import asyncio

from scrapingant_client import ScrapingAntClient

client = ScrapingAntClient(token='<YOUR-SCRAPINGANT-API-TOKEN>')


async def main():
    # Scrape the example.com site
    result = await client.general_request_async('https://example.com')
    print(result.content)


asyncio.run(main())
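
Because general_request_async is a regular coroutine, several pages can be fetched concurrently with standard asyncio tooling. A sketch using asyncio.gather (the URLs are placeholders):

import asyncio

from scrapingant_client import ScrapingAntClient

client = ScrapingAntClient(token='<YOUR-SCRAPINGANT-API-TOKEN>')

urls = ['https://example.com', 'https://httpbin.org/html']  # placeholder URLs


async def main():
    # Run the requests concurrently and collect responses in the same order as urls
    results = await asyncio.gather(
        *(client.general_request_async(url) for url in urls)
    )
    for url, result in zip(urls, results):
        print(url, result.status_code)


asyncio.run(main())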

Sending POST request

from scrapingant_client import ScrapingAntClient

client = ScrapingAntClient(token='<YOUR-SCRAPINGANT-API-TOKEN>')

# Sending POST request with json data
result = client.general_request(
    url="https://httpbin.org/post",
    method="POST",
    json={"test": "test"},
)
print(result.content)

# Sending POST request with bytes data
result = client.general_request(
    url="https://httpbin.org/post",
    method="POST",
    data=b'test_bytes',
)
print(result.content)
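
Since the data parameter behaves like the requests library's 'data' argument, passing a dict should send form-encoded data. A hedged sketch:

from scrapingant_client import ScrapingAntClient

client = ScrapingAntClient(token='<YOUR-SCRAPINGANT-API-TOKEN>')

# Sending POST request with form data (dict passed through like requests' 'data' param)
result = client.general_request(
    url="https://httpbin.org/post",
    method="POST",
    data={"field1": "value1", "field2": "value2"},
)
print(result.content)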

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

scrapingant-client-2.0.1.tar.gz (8.3 kB)


Built Distribution

scrapingant_client-2.0.1-py3-none-any.whl (11.4 kB)


File details

Details for the file scrapingant-client-2.0.1.tar.gz.

File metadata

  • Download URL: scrapingant-client-2.0.1.tar.gz
  • Upload date:
  • Size: 8.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.9.17

File hashes

Hashes for scrapingant-client-2.0.1.tar.gz
Algorithm Hash digest
SHA256 7c22bc25323cf49ec11e4f9cf447f584ddb4b98e4d4917b29d5929c04157598f
MD5 d0ff493ceee22b716561706a057ab6b1
BLAKE2b-256 308129d721bb87f88cc0aa09df2a9fb8a0677b1ae74be7f5ee1634eb80736b41


File details

Details for the file scrapingant_client-2.0.1-py3-none-any.whl.

File metadata

File hashes

Hashes for scrapingant_client-2.0.1-py3-none-any.whl
Algorithm Hash digest
SHA256 4a8a2f3f9e2751ddda2fd44b2d71f8e84f2575b71cd86095ebd4d5d16b296f25
MD5 a021173a6f4acf8835a52d7f7842d074
BLAKE2b-256 6704c782b1665ed7a447db83c8efb973f3d3aef9c9eeb7a5e5064e9e5d2e5553

