
Lightweight Python SDK for the FetchSERP API (single dependency: requests)

Project description

FetchSERP Python SDK


A lightweight Python wrapper around the FetchSERP API, with requests as its only dependency.

With a single class (FetchSERPClient) you can:

  • Retrieve live search-engine result pages (SERPs) in multiple formats (raw, HTML, JS-rendered, text).
  • Analyse keyword & domain performance (search volume, ranking, Moz metrics, etc.).
  • Scrape web pages (static or headless/JS, with or without proxy).
  • Run on-page SEO / AI analyses.
  • Inspect backlinks, emails, DNS, WHOIS, SSL and technology stacks.

Installation

python -m pip install fetchserp

The only dependency pulled in is requests; nothing heavyweight.


Quick start

from fetchserp import FetchSERPClient

API_KEY = "YOUR_SECRET_API_KEY"

with FetchSERPClient(API_KEY) as fs:
    serp = fs.get_serp(query="python asyncio", pages_number=2)
    print(serp["data"]["results_count"], "results fetched")

The client raises fetchserp.client.FetchSERPError on any non-2xx response, which makes error handling straightforward.
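Because every failure surfaces as a single exception type, transient errors (rate limits, timeouts) can be handled with a thin retry wrapper. A minimal sketch; the wrapper below is generic, and FetchSERPError is simply passed in as the exception type rather than assumed in the code:

```python
import time

def with_retries(call, exc_type, retries=3, backoff=1.0):
    """Retry call() when it raises exc_type, doubling the delay each time."""
    delay = backoff
    for attempt in range(retries):
        try:
            return call()
        except exc_type:
            if attempt == retries - 1:
                raise  # out of attempts: propagate the last error
            time.sleep(delay)
            delay *= 2

# Wiring it to the SDK:
# from fetchserp.client import FetchSERPError
# serp = with_retries(lambda: fs.get_serp(query="python asyncio"), FetchSERPError)
```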


Authentication

All endpoints require a Bearer token. Pass your key when constructing the client:

fs = FetchSERPClient("BEARER_TOKEN")

The SDK automatically adds Authorization: Bearer <token> to every request.
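Rather than hard-coding the token, a common pattern is to read it from the environment. A small sketch; the variable name FETCHSERP_API_KEY is a convention chosen for this example, not something the SDK requires:

```python
import os

def load_api_key(env_var="FETCHSERP_API_KEY"):
    """Fetch the API key from the environment, failing loudly if unset."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"Set {env_var} to your FetchSERP API key")
    return key

# fs = FetchSERPClient(load_api_key())
```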


Endpoints & SDK mapping

SDK method                    HTTP  Path                                  Description
get_backlinks                 GET   /api/v1/backlinks                     Backlinks for a domain
get_domain_emails             GET   /api/v1/domain_emails                 Emails discovered on a domain
get_domain_info               GET   /api/v1/domain_infos                  DNS, WHOIS, SSL & stack
get_keywords_search_volume    GET   /api/v1/keywords_search_volume        Google Ads search volume
get_keywords_suggestions      GET   /api/v1/keywords_suggestions          Keyword ideas by URL or seed list
generate_long_tail_keywords   GET   /api/v1/long_tail_keywords_generator  Long-tail keyword generator
get_moz_domain_analysis       GET   /api/v1/moz                           Moz domain authority metrics
check_page_indexation         GET   /api/v1/page_indexation               Checks if a URL is indexed for a keyword
get_domain_ranking            GET   /api/v1/ranking                       Ranking position of a domain for a keyword
scrape_page                   GET   /api/v1/scrape                        Static scrape (no JS)
scrape_domain                 GET   /api/v1/scrape_domain                 Crawl multiple pages of a domain
scrape_page_js                POST  /api/v1/scrape_js                     Run custom JS & scrape
scrape_page_js_with_proxy     POST  /api/v1/scrape_js_with_proxy          JS scrape using residential proxy
get_serp                      GET   /api/v1/serp                          SERP (static)
get_serp_html                 GET   /api/v1/serp_html                     SERP with full HTML
start_serp_js_job             GET   /api/v1/serp_js                       Launch JS-rendered SERP job (returns UUID)
get_serp_js_result            GET   /api/v1/serp_js/{uuid}                Poll job result
get_serp_ai_mode              GET   /api/v1/serp_ai_mode                  SERP with AI Overview & AI Mode (fast, <30s)
get_serp_text                 GET   /api/v1/serp_text                     SERP + extracted text
get_user                      GET   /api/v1/user                          Current user info + credit balance
get_webpage_ai_analysis       GET   /api/v1/webpage_ai_analysis           Custom AI analysis of any webpage
get_playwright_mcp            GET   /api/v1/playwright_mcp                GPT-4.1 browser automation via Playwright MCP
get_webpage_seo_analysis      GET   /api/v1/webpage_seo_analysis          Full on-page SEO audit

Examples

1. Long-tail keyword ideas

ideas = fs.generate_long_tail_keywords(keyword="electric cars", count=25)

2. JS-rendered SERP with AI overview

job = fs.start_serp_js_job(query="best coffee makers", country="us")
result = fs.get_serp_js_result(uuid=job["data"]["uuid"])
print(result["data"]["results"][0]["ai_overview"]["content"])
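Since get_serp_js_result is documented as a polling endpoint, the job may not be finished on the first call. A generic polling helper can wrap the loop; this is a sketch, and the readiness check on the response shape in the commented usage is an assumption, not documented API:

```python
import time

def poll_job(fetch_result, is_ready, attempts=10, delay=2.0):
    """Call fetch_result() until is_ready(result) is truthy, sleeping
    `delay` seconds between attempts; raise TimeoutError if it never is."""
    for _ in range(attempts):
        result = fetch_result()
        if is_ready(result):
            return result
        time.sleep(delay)
    raise TimeoutError("SERP job did not finish in time")

# Wiring it to the SDK (response shape assumed):
# result = poll_job(
#     lambda: fs.get_serp_js_result(uuid=job["data"]["uuid"]),
#     lambda r: r.get("data", {}).get("results"),
# )
```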

3. Fast AI Overview & AI Mode (single call)

result = fs.get_serp_ai_mode(query="how to learn python programming")
print(result["data"]["results"][0]["ai_overview"]["content"])
print(result["data"]["results"][0]["ai_mode_response"]["content"])

4. Scrape a page with custom JavaScript

payload = {
    "url": "https://fetchserp.com",
    "js_script": "return { title: document.title, h1: document.querySelector('h1')?.textContent };"
}
result = fs.scrape_page_js(**payload)

5. Automate browser tasks with AI

result = fs.get_playwright_mcp(prompt="Navigate to github.com and search for 'python selenium'")
print(result["data"]["response"])

6. Comprehensive SEO audit

result = fs.get_webpage_seo_analysis(url="https://fetchserp.com")
print(result["data"]["summary"])

Contributing

Pull requests are welcome! Please open an issue first to discuss major changes.


License

GPL-3.0-or-later. See the LICENSE file for full text.



Download files

Download the file for your platform.

Source Distribution

fetchserp-0.3.0.tar.gz (6.2 kB)


Built Distribution


fetchserp-0.3.0-py3-none-any.whl (6.3 kB)


File details

Details for the file fetchserp-0.3.0.tar.gz.

File metadata

  • Download URL: fetchserp-0.3.0.tar.gz
  • Upload date:
  • Size: 6.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.5

File hashes

Hashes for fetchserp-0.3.0.tar.gz

Algorithm    Hash digest
SHA256       5cd4d70d9be2fdd14f6fe970f953493166fb7a59ab529f10f7de3d04e42930f5
MD5          f9db5406e8a9ac60e2e146c240d1f02b
BLAKE2b-256  df264d8f2a6e532352f26d32d84b6e31364dd7651817779aeb028836f29965d6


File details

Details for the file fetchserp-0.3.0-py3-none-any.whl.

File metadata

  • Download URL: fetchserp-0.3.0-py3-none-any.whl
  • Upload date:
  • Size: 6.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.5

File hashes

Hashes for fetchserp-0.3.0-py3-none-any.whl

Algorithm    Hash digest
SHA256       4ede255d948d1584728c0fa5da1186a82c9440f539aa2de6b19524318b8aa914
MD5          38ffe3407867af1e8725339aae281fcd
BLAKE2b-256  a0677c4276595cc2495cb9a6e3bedcfdcd7949be9f553c117fe60107692fedcd

