
Spider Cloud Python SDK

The Spider Cloud Python SDK provides a toolkit for straightforward website scraping, crawling at scale, and related utilities such as extracting links and taking screenshots, letting you collect data formatted for large language models (LLMs). It features a user-friendly interface for seamless integration with the Spider Cloud API.

Installation

To install the Spider Cloud Python SDK, you can use pip:

pip install spider_client

Usage

  1. Get an API key from spider.cloud
  2. Set the API key as an environment variable named SPIDER_API_KEY or pass it as a parameter to the Spider class.
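
For example, here is a minimal sketch of initializing the client from the environment, assuming (as described above) that the Spider constructor falls back to the SPIDER_API_KEY environment variable when no key is passed:

import os
from spider import Spider

# Assumption: the constructor reads SPIDER_API_KEY when api_key is omitted
os.environ['SPIDER_API_KEY'] = 'your_api_key'  # or export it in your shell
app = Spider()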

Here's an example of how to use the SDK:

from spider import Spider

# Initialize the Spider with your API key
app = Spider(api_key='your_api_key')

# Scrape a single URL
url = 'https://spider.cloud'
scraped_data = app.scrape_url(url)

# Crawl a website
crawler_params = {
    'limit': 1,
    'proxy_enabled': True,
    'metadata': False,
    'request': 'http'
}
crawl_result = app.crawl_url(url, params=crawler_params)

Scraping a URL

To scrape data from a single URL:

url = 'https://example.com'
scraped_data = app.scrape_url(url)

Crawling a Website

To automate crawling a website:

url = 'https://example.com'
crawl_params = {
    'limit': 200,
    'request': 'smart_mode'
}
crawl_result = app.crawl_url(url, params=crawl_params)

Crawl Streaming

Stream the crawl in chunks to handle large websites. Each JSON object is passed to your callback as it arrives:

def handle_json(json_obj: dict) -> None:
    assert json_obj["url"] is not None

url = 'https://example.com'
crawl_params = {
    'limit': 200,
}
response = app.crawl_url(
    url,
    params=crawl_params,
    stream=True,
    callback=handle_json,
)

Search

Perform a search for websites to crawl or gather search results:

query = 'a sports website'
crawl_params = {
    'request': 'smart_mode',
    'search_limit': 5,
    'limit': 5,
    'fetch_page_content': True
}
crawl_result = app.search(query, params=crawl_params)

Retrieving Links from a URL

Extract all links from a specified URL:

url = 'https://example.com'
links = app.links(url)

Transform

Transform HTML to markdown or text lightning fast:

data = [ { 'html': '<html><body><h1>Hello world</h1></body></html>' } ]
params = {
    'readability': False,
    'return_format': 'markdown',
}
result = app.transform(data, params=params)

Taking Screenshots of a URL

Capture a screenshot of a given URL:

url = 'https://example.com'
screenshot = app.screenshot(url)

Checking Available Credits

You can check the remaining credits on your account:

credits = app.get_credits()

Unblocker

Access blocked or protected content with anti-bot bypass:

url = 'https://protected-site.com'
result = app.unblocker(url)

Unblocker with AI Extraction

Unblock and extract structured data using AI:

url = 'https://protected-site.com/products'
result = app.unblocker(url, params={
    'custom_prompt': 'Extract all product names and prices as JSON'
})
# Extracted data is available in result[0]['metadata']['extracted_data']

Unblocker with JSON Schema Extraction

Use JSON Schema for structured, validated extraction output:

url = 'https://protected-site.com/products'
result = app.unblocker(url, params={
    'extraction_schema': {
        'name': 'products',
        'description': 'Product listing extraction',
        'schema': '''{
            "type": "object",
            "properties": {
                "products": {
                    "type": "array",
                    "items": {
                        "type": "object",
                        "properties": {
                            "name": {"type": "string"},
                            "price": {"type": "number"}
                        },
                        "required": ["name", "price"]
                    }
                }
            }
        }''',
        'strict': True
    }
})
# Extracted data conforms to the schema in result[0]['metadata']['extracted_data']

AI Studio Methods

AI Studio methods require an active AI Studio subscription. See spider.cloud/ai/pricing for plans.

AI Crawl

AI-guided crawling using natural language prompts:

result = app.ai_crawl(
    url='https://example.com',
    prompt='Find all blog posts and extract titles and summaries'
)

AI Scrape

AI-guided scraping using natural language prompts:

result = app.ai_scrape(
    url='https://example.com/products',
    prompt='Extract all product names, prices, and descriptions'
)

AI Search

AI-enhanced web search using natural language:

result = app.ai_search(prompt='Find the best Python web scraping libraries')

AI Browser

AI-guided browser automation:

result = app.ai_browser(
    url='https://example.com/login',
    prompt='Click the sign in button and fill the email field with test@example.com'
)

AI Links

AI-guided link extraction and filtering:

result = app.ai_links(
    url='https://example.com',
    prompt='Find all links to product pages and documentation'
)

Streaming

If you need to stream the request, use the third parameter:

url = 'https://example.com'

crawler_params = {
    'limit': 1,
    'proxy_enabled': True,
    'metadata': False,
    'request': 'http'
}

links = app.links(url, crawler_params, True)

Content-Type

The following Content-Type headers are supported via the fourth parameter:

  1. application/json
  2. text/csv
  3. application/xml
  4. application/jsonl

url = 'https://example.com'

crawler_params = {
    'limit': 1,
    'proxy_enabled': True,
    'metadata': False,
    'request': 'http'
}

# Stream JSON lines back to the client
crawl_result = app.crawl_url(url, crawler_params, True, "application/jsonl")

Error Handling

The SDK handles errors returned by the Spider Cloud API and raises appropriate exceptions. If an error occurs during a request, an exception will be raised with a descriptive error message.
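
A minimal sketch of handling a failed request, assuming the SDK surfaces API errors as standard Python exceptions (the exact exception classes are not documented here, so this catches broadly):

from spider import Spider

app = Spider(api_key='your_api_key')

try:
    data = app.scrape_url('https://spider.cloud')
except Exception as e:  # assumption: specific exception types vary
    print(f'Request failed: {e}')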

Contributing

Contributions to the Spider Cloud Python SDK are welcome! If you find any issues or have suggestions for improvements, please open an issue or submit a pull request on the GitHub repository.

License

The Spider Cloud Python SDK is open-source and released under the MIT License.
