Lightweight Python SDK for the FetchSERP API, with requests as its only dependency

Project description

FetchSERP Python SDK


A lightweight Python wrapper around the FetchSERP API; requests is its only third-party dependency.

With a single class (FetchSERPClient) you can:

  • Retrieve live search-engine result pages (SERPs) in multiple formats (raw, HTML, JS-rendered, text).
  • Analyse keyword & domain performance (search volume, ranking, Moz metrics, etc.).
  • Scrape web pages (static or headless/JS, with or without proxy).
  • Run on-page SEO / AI analyses.
  • Inspect backlinks, emails, DNS, WHOIS, SSL and technology stacks.

Installation

python -m pip install fetchserp

requests is the only dependency pulled in alongside the package; there are no heavy extras.


Quick start

from fetchserp import FetchSERPClient

API_KEY = "YOUR_SECRET_API_KEY"

with FetchSERPClient(API_KEY) as fs:
    serp = fs.get_serp(query="python asyncio", pages_number=2)
    print(serp["data"]["results_count"], "results fetched")

The client raises fetchserp.client.FetchSERPError on any non-2xx response, making error handling straightforward.
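Because every failed call surfaces as a single exception type, transient failures can be handled with a small retry wrapper. The helper below is a generic sketch, not part of the SDK; the only FetchSERP-specific assumption is the FetchSERPError class named above.

```python
import time

def call_with_retry(fn, *args, retries=3, delay=1.0, exceptions=(Exception,), **kwargs):
    """Call fn(*args, **kwargs), retrying up to `retries` times on the given exceptions."""
    for attempt in range(retries):
        try:
            return fn(*args, **kwargs)
        except exceptions:
            if attempt == retries - 1:
                raise  # out of attempts: re-raise the last error
            time.sleep(delay)

# Usage against the SDK (key and client as in the quick start):
# from fetchserp.client import FetchSERPError
# serp = call_with_retry(fs.get_serp, query="python asyncio",
#                        exceptions=(FetchSERPError,))
```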


Authentication

All endpoints require a Bearer token. Pass your key when constructing the client:

fs = FetchSERPClient("BEARER_TOKEN")

The SDK automatically adds Authorization: Bearer <token> to every request.
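The header the SDK attaches can be pictured as a one-line builder. This is a sketch of the documented behaviour, not the SDK's actual internals:

```python
def bearer_headers(token: str) -> dict:
    """Build the Authorization header the SDK adds to every request."""
    return {"Authorization": f"Bearer {token}"}

print(bearer_headers("BEARER_TOKEN"))
# → {'Authorization': 'Bearer BEARER_TOKEN'}
```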


Endpoints & SDK mapping

| SDK method | HTTP | Path | Description |
| --- | --- | --- | --- |
| get_backlinks | GET | /api/v1/backlinks | Backlinks for a domain |
| get_domain_emails | GET | /api/v1/domain_emails | Emails discovered on a domain |
| get_domain_info | GET | /api/v1/domain_infos | DNS, WHOIS, SSL & stack |
| get_keywords_search_volume | GET | /api/v1/keywords_search_volume | Google Ads search volume |
| get_keywords_suggestions | GET | /api/v1/keywords_suggestions | Keyword ideas by URL or seed list |
| generate_long_tail_keywords | GET | /api/v1/long_tail_keywords_generator | Long-tail keyword generator |
| get_moz_domain_analysis | GET | /api/v1/moz | Moz domain authority metrics |
| check_page_indexation | GET | /api/v1/page_indexation | Checks if a URL is indexed for a keyword |
| get_domain_ranking | GET | /api/v1/ranking | Ranking position of a domain for a keyword |
| scrape_page | GET | /api/v1/scrape | Static scrape (no JS) |
| scrape_domain | GET | /api/v1/scrape_domain | Crawl multiple pages of a domain |
| scrape_page_js | POST | /api/v1/scrape_js | Run custom JS & scrape |
| scrape_page_js_with_proxy | POST | /api/v1/scrape_js_with_proxy | JS scrape using residential proxy |
| get_serp | GET | /api/v1/serp | SERP (static) |
| get_serp_html | GET | /api/v1/serp_html | SERP with full HTML |
| start_serp_js_job | GET | /api/v1/serp_js | Launch JS-rendered SERP job (returns UUID) |
| get_serp_js_result | GET | /api/v1/serp_js/{uuid} | Poll job result |
| get_serp_ai_mode | GET | /api/v1/serp_ai_mode | SERP with AI Overview & AI Mode (fast, <30s) |
| get_serp_text | GET | /api/v1/serp_text | SERP + extracted text |
| get_user | GET | /api/v1/user | Authenticated user & credits |
| get_web_page_ai_analysis | GET | /api/v1/web_page_ai_analysis | AI-powered page analysis |
| get_web_page_seo_analysis | GET | /api/v1/web_page_seo_analysis | Full SEO audit |

Examples

1. Long-tail keyword ideas

ideas = fs.generate_long_tail_keywords(keyword="electric cars", count=25)

2. JS-rendered SERP with AI overview

job = fs.start_serp_js_job(query="best coffee makers", country="us")
result = fs.get_serp_js_result(uuid=job["data"]["uuid"])
print(result["data"]["results"][0]["ai_overview"]["content"])
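The job is asynchronous, so the first poll may come back before results are ready. A small polling loop can wrap the result call; this is a sketch with the fetcher injected, and the truthy "data" key check mirrors the example above (an assumption about the response shape):

```python
import time

def poll_serp_job(fetch_result, uuid, attempts=10, delay=2.0):
    """Poll a JS-rendered SERP job (e.g. fs.get_serp_js_result) until data arrives."""
    for _ in range(attempts):
        result = fetch_result(uuid=uuid)
        if result.get("data"):
            return result
        time.sleep(delay)
    raise TimeoutError(f"SERP job {uuid} not ready after {attempts} polls")

# result = poll_serp_job(fs.get_serp_js_result, job["data"]["uuid"])
```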

3. Fast AI Overview & AI Mode (single call)

result = fs.get_serp_ai_mode(query="how to learn python programming")
print(result["data"]["results"][0]["ai_overview"]["content"])
print(result["data"]["results"][0]["ai_mode_response"]["content"])

4. Scrape a page with custom JavaScript

payload = {
    "url": "https://fetchserp.com",
    "js_script": "return { title: document.title, h1: document.querySelector('h1').innerText };"
}
print(fs.scrape_page_js(**payload))

Contributing

Pull requests are welcome! Please open an issue first to discuss major changes.


License

GPL-3.0-or-later. See the LICENSE file for full text.

Project details


Download files

Download the file for your platform.

Source Distribution

fetchserp-0.1.2.tar.gz (5.8 kB)


Built Distribution


fetchserp-0.1.2-py3-none-any.whl (6.1 kB)


File details

Details for the file fetchserp-0.1.2.tar.gz.

File metadata

  • Download URL: fetchserp-0.1.2.tar.gz
  • Size: 5.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.5

File hashes

Hashes for fetchserp-0.1.2.tar.gz

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | f4cef201c2ee2d4071d99b8edee97c40c055d033f9687d8dbd6d9b2c9e7b5149 |
| MD5 | b93138f7c31db41b1eaee56b448df42b |
| BLAKE2b-256 | ea9c90b6e2f9c6f5cc4335c52c82365899065d7a3eceaf50a8a299a4f829e0cf |


File details

Details for the file fetchserp-0.1.2-py3-none-any.whl.

File metadata

  • Download URL: fetchserp-0.1.2-py3-none-any.whl
  • Size: 6.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.5

File hashes

Hashes for fetchserp-0.1.2-py3-none-any.whl

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | cfc26b999e751596dfe181596db150135b048e5e0383f9bab68638f75d8466f9 |
| MD5 | 5b41bd5eb93eabaedfd992f60ff7957b |
| BLAKE2b-256 | d07cd025f98fd1d2bab55586aea34ad0dc0bbcda98643479bc094f4763d9c75f |

