
FetchSERP Python SDK


A lightweight Python wrapper around the FetchSERP API whose only dependency is requests.

With a single class (FetchSERPClient) you can:

  • Retrieve live search-engine result pages (SERPs) in multiple formats (raw, HTML, JS-rendered, text).
  • Analyse keyword & domain performance (search volume, ranking, Moz metrics, etc.).
  • Scrape web pages (static or headless/JS, with or without proxy).
  • Run on-page SEO / AI analyses.
  • Inspect backlinks, emails, DNS, WHOIS, SSL and technology stacks.

Installation

python -m pip install fetchserp

The package itself is tiny; its only third-party dependency is requests.


Quick start

from fetchserp import FetchSERPClient

API_KEY = "YOUR_SECRET_API_KEY"

with FetchSERPClient(API_KEY) as fs:
    serp = fs.get_serp(query="python asyncio", pages_number=2)
    print(serp["data"]["results_count"], "results fetched")

The client raises fetchserp.client.FetchSERPError on any non-2xx response, so every API failure can be caught with a single except clause.
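Transient failures (rate limits, timeouts) can be retried with a small wrapper around any SDK call. This is a sketch, not part of the SDK; call_with_retry and its parameters are illustrative names:

```python
def call_with_retry(api_call, exceptions=(Exception,), attempts=3):
    """Retry a zero-argument callable a few times before giving up.

    With the SDK you would pass exceptions=(FetchSERPError,),
    imported from fetchserp.client.
    """
    last_error = None
    for _ in range(attempts):
        try:
            return api_call()
        except exceptions as exc:
            last_error = exc
    raise last_error

# Sketch of SDK usage (assumes `fs` is an open FetchSERPClient):
# from fetchserp.client import FetchSERPError
# serp = call_with_retry(lambda: fs.get_serp(query="python asyncio"),
#                        exceptions=(FetchSERPError,))
```

Catching only FetchSERPError keeps genuine bugs (KeyError, TypeError) from being silently retried.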


Authentication

All endpoints require a Bearer token. Pass your key when constructing the client:

fs = FetchSERPClient("BEARER_TOKEN")

The SDK automatically adds Authorization: Bearer <token> to every request.
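If you ever need to hit the API without the SDK, the same header can be built by hand. A minimal sketch (the base URL shown in the comment is an assumption, not taken from the SDK):

```python
def auth_headers(token):
    # Same header the SDK attaches to every request.
    return {"Authorization": f"Bearer {token}"}

# Raw call with requests (the SDK's only dependency):
# import requests
# resp = requests.get(
#     "https://www.fetchserp.com/api/v1/user",  # base URL is an assumption
#     headers=auth_headers("BEARER_TOKEN"),
# )
```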


Endpoints & SDK mapping

SDK method                   HTTP  Path                                  Description
get_backlinks                GET   /api/v1/backlinks                     Backlinks for a domain
get_domain_emails            GET   /api/v1/domain_emails                 Emails discovered on a domain
get_domain_info              GET   /api/v1/domain_infos                  DNS, WHOIS, SSL & stack
get_keywords_search_volume   GET   /api/v1/keywords_search_volume        Google Ads search volume
get_keywords_suggestions     GET   /api/v1/keywords_suggestions          Keyword ideas by URL or seed list
generate_long_tail_keywords  GET   /api/v1/long_tail_keywords_generator  Long-tail keyword generator
get_moz_domain_analysis      GET   /api/v1/moz                           Moz domain authority metrics
check_page_indexation        GET   /api/v1/page_indexation               Checks if a URL is indexed for a keyword
get_domain_ranking           GET   /api/v1/ranking                       Ranking position of a domain for a keyword
scrape_page                  GET   /api/v1/scrape                        Static scrape (no JS)
scrape_domain                GET   /api/v1/scrape_domain                 Crawl multiple pages of a domain
scrape_page_js               POST  /api/v1/scrape_js                     Run custom JS & scrape
scrape_page_js_with_proxy    POST  /api/v1/scrape_js_with_proxy          JS scrape using residential proxy
get_serp                     GET   /api/v1/serp                          SERP (static)
get_serp_html                GET   /api/v1/serp_html                     SERP with full HTML
start_serp_js_job            GET   /api/v1/serp_js                       Launch JS-rendered SERP job (returns UUID)
get_serp_js_result           GET   /api/v1/serp_js/{uuid}                Poll job result
get_serp_text                GET   /api/v1/serp_text                     SERP + extracted text
get_user                     GET   /api/v1/user                          Authenticated user & credits
get_web_page_ai_analysis     GET   /api/v1/web_page_ai_analysis          AI-powered page analysis
get_web_page_seo_analysis    GET   /api/v1/web_page_seo_analysis         Full SEO audit

Examples

1. Long-tail keyword ideas

ideas = fs.generate_long_tail_keywords(keyword="electric cars", count=25)

2. JS-rendered SERP with AI overview

job = fs.start_serp_js_job(query="best coffee makers", country="us")
result = fs.get_serp_js_result(uuid=job["data"]["uuid"])
print(result["data"]["results"][0]["ai_overview"]["content"])
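Since the endpoint is described as a poll target, the job result may not be ready on the first call. A simple poll-with-delay loop can help; this is a sketch, and the assumption that a pending job returns an empty "results" list should be checked against the actual API response:

```python
import time

def poll_until_ready(fetch_result, attempts=10, delay=2.0):
    """Call fetch_result() until the job payload contains results.

    fetch_result: zero-argument callable returning the response dict,
    e.g. lambda: fs.get_serp_js_result(uuid=job["data"]["uuid"]).
    """
    for _ in range(attempts):
        data = fetch_result()
        if data.get("data", {}).get("results"):
            return data
        time.sleep(delay)
    raise TimeoutError("SERP JS job did not complete in time")

# result = poll_until_ready(
#     lambda: fs.get_serp_js_result(uuid=job["data"]["uuid"])
# )
```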

3. Scrape a page with custom JavaScript

payload = {
    "url": "https://fetchserp.com",
    "js_script": "return { title: document.title, h1: document.querySelector('h1').innerText };"
}
print(fs.scrape_page_js(**payload))

Contributing

Pull requests are welcome! Please open an issue first to discuss major changes.


License

GPL-3.0-or-later. See the LICENSE file for full text.

