
Crawlbase API Python class

A lightweight, dependency-free Python class that acts as a wrapper for the Crawlbase API.

Installing

Choose a way of installing:

  • Download the Python class from GitHub.
  • Or use the PyPI package manager: pip install crawlbase

Then import CrawlingAPI, ScraperAPI, etc., as needed.

from crawlbase import CrawlingAPI, ScraperAPI, LeadsAPI, ScreenshotsAPI, StorageAPI

Crawling API

First initialize the CrawlingAPI class.

api = CrawlingAPI({ 'token': 'YOUR_CRAWLBASE_TOKEN' })

GET requests

Pass the URL that you want to scrape plus any of the options available in the API documentation.

api.get(url, options = {})

Example:

response = api.get('https://www.facebook.com/britneyspears')
if response['status_code'] == 200:
    print(response['body'])

You can pass any options supported by the Crawlbase API.

Example:

response = api.get('https://www.reddit.com/r/pics/comments/5bx4bx/thanks_obama/', {
    'user_agent': 'Mozilla/5.0 (Windows NT 6.2; rv:20.0) Gecko/20121202 Firefox/30.0',
    'format': 'json'
})
if response['status_code'] == 200:
    print(response['body'])

POST requests

Pass the URL that you want to scrape, the data that you want to send (either a dictionary or a string), plus any of the options available in the API documentation.

api.post(url, data, options = {})

Example:

response = api.post('https://producthunt.com/search', { 'text': 'example search' })
if response['status_code'] == 200:
    print(response['body'])

You can send the data as application/json instead of x-www-form-urlencoded by setting the post_content_type option to json.

import json
response = api.post('https://httpbin.org/post', json.dumps({ 'some_json': 'with some value' }), { 'post_content_type': 'json' })
if response['status_code'] == 200:
    print(response['body'])

Javascript requests

If you need to scrape websites built with JavaScript frameworks such as React, Angular, or Vue, just pass your JavaScript token and use the same calls. Note that only .get is available for JavaScript requests, not .post.

api = CrawlingAPI({ 'token': 'YOUR_JAVASCRIPT_TOKEN' })
response = api.get('https://www.nfl.com')
if response['status_code'] == 200:
    print(response['body'])

In the same way, you can pass additional JavaScript options.

response = api.get('https://www.freelancer.com', { 'page_wait': 5000 })
if response['status_code'] == 200:
    print(response['body'])

Original status

You can always get the original status and the Crawlbase status from the response. Read the Crawlbase documentation to learn more about those statuses.

response = api.get('https://craiglist.com')
print(response['headers']['original_status'])
print(response['headers']['pc_status'])

If you have questions or need help using the library, please open an issue or contact us.

Scraper API

Usage of the Scraper API is very similar; just change the class name when initializing.

scraper_api = ScraperAPI({ 'token': 'YOUR_NORMAL_TOKEN' })

response = scraper_api.get('https://www.amazon.com/DualSense-Wireless-Controller-PlayStation-5/dp/B08FC6C75Y/')
if response['status_code'] == 200:
    print(response['json']['name']) # Will print the name of the Amazon product

Leads API

To find email leads you can use the Leads API; check the full API documentation if needed.

leads_api = LeadsAPI({ 'token': 'YOUR_NORMAL_TOKEN' })

response = leads_api.get_from_domain('microsoft.com')

if response['status_code'] == 200:
    print(response['json']['leads'])

Screenshots API

Initialize with your Screenshots API token and call the get method.

screenshots_api = ScreenshotsAPI({ 'token': 'YOUR_NORMAL_TOKEN' })
response = screenshots_api.get('https://www.apple.com')
if response['status_code'] == 200:
    print(response['headers']['success'])
    print(response['headers']['url'])
    print(response['headers']['remaining_requests'])
    print(response['file'])

Or specify a file path:

screenshots_api = ScreenshotsAPI({ 'token': 'YOUR_NORMAL_TOKEN' })
response = screenshots_api.get('https://www.apple.com', { 'save_to_path': 'apple.jpg' })
if response['status_code'] == 200:
    print(response['headers']['success'])
    print(response['headers']['url'])
    print(response['headers']['remaining_requests'])
    print(response['file'])

Or, if you set store=true, then screenshot_url is set in the returned headers:

screenshots_api = ScreenshotsAPI({ 'token': 'YOUR_NORMAL_TOKEN' })
response = screenshots_api.get('https://www.apple.com', { 'store': 'true' })
if response['status_code'] == 200:
    print(response['headers']['success'])
    print(response['headers']['url'])
    print(response['headers']['remaining_requests'])
    print(response['file'])
    print(response['headers']['screenshot_url'])

Note that the screenshots_api.get(url, options) method accepts an options dictionary, just like the Crawling API.
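
For example, here is a minimal sketch, assuming the options shown above can be combined in a single call:

screenshots_api = ScreenshotsAPI({ 'token': 'YOUR_NORMAL_TOKEN' })
# Assumption: save_to_path and store can be passed together in one request.
response = screenshots_api.get('https://www.apple.com', {
    'save_to_path': 'apple.jpg',  # write the screenshot to a local file
    'store': 'true'               # also store it so screenshot_url is returned
})
if response['status_code'] == 200:
    print(response['headers']['screenshot_url'])
    print(response['file'])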

Storage API

Initialize the Storage API using your private token.

storage_api = StorageAPI({ 'token': 'YOUR_NORMAL_TOKEN' })

Pass the URL that you want to get from Crawlbase Storage.

response = storage_api.get('https://www.apple.com')
if response['status_code'] == 200:
    print(response['headers']['original_status'])
    print(response['headers']['pc_status'])
    print(response['headers']['url'])
    print(response['headers']['rid'])
    print(response['headers']['stored_at'])
    print(response['body'])

Or you can use the RID:

response = storage_api.get('RID_REPLACE')
if response['status_code'] == 200:
    print(response['headers']['original_status'])
    print(response['headers']['pc_status'])
    print(response['headers']['url'])
    print(response['headers']['rid'])
    print(response['headers']['stored_at'])
    print(response['body'])

Note: Either the RID or the URL must be sent; both are optional individually, but it is mandatory to send one of the two.
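
As a minimal sketch of how the two fit together, you can fetch by URL once and reuse the RID returned in the headers for later lookups (assuming the rid header value is accepted by get, as in the RID example above):

response = storage_api.get('https://www.apple.com')
if response['status_code'] == 200:
    # The rid header identifies the stored document; reuse it for a later lookup.
    rid = response['headers']['rid']
    stored = storage_api.get(rid)
    print(stored['headers']['stored_at'])
    print(stored['body'])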

Delete request

To delete a storage item from your storage area, use the corresponding RID:

if storage_api.delete('RID_REPLACE'):
    print('delete success')
else:
    print('Unable to delete')

Bulk request

To make a bulk request with a list of RIDs, send the RIDs as a list:

response = storage_api.bulk(['RID1', 'RID2', 'RID3', ...])
if response['status_code'] == 200:
    for item in response['json']:
        print(item['original_status'])
        print(item['pc_status'])
        print(item['url'])
        print(item['rid'])
        print(item['stored_at'])
        print(item['body'])

RIDs request

To request a bulk list of RIDs from your storage area:

rids = storage_api.rids()
print(rids)

You can also specify a limit as a parameter:

storage_api.rids(100)

Total Count

To get the total number of documents in your storage area:

total_count = storage_api.totalCount()
print(total_count)

Custom timeout

If you need to use a custom timeout, you can pass it when creating the class instance, like the following:

api = CrawlingAPI({ 'token': 'TOKEN', 'timeout': 120 })

Timeout is in seconds.
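
The same option should also be accepted by the other API classes (this is an assumption; the example above only shows CrawlingAPI):

# Assumption: the other API classes accept the same 'timeout' option.
scraper_api = ScraperAPI({ 'token': 'TOKEN', 'timeout': 120 })
storage_api = StorageAPI({ 'token': 'TOKEN', 'timeout': 120 })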


Copyright 2023 Crawlbase

