
Google Search Results in Python


This Python package is meant to scrape and parse Google search results using SerpApi. The following services are provided: Search API, Location API, Search Archive API, and Account API.

Serp API provides a script builder to get you started quickly.

Feel free to fork this repository to add more backends.

Installation

Python 2.7 or 3.7

pip install google-search-results

Link to the Python package page: https://pypi.org/project/google-search-results/

Quick start

from google_search_results import GoogleSearchResults
query = GoogleSearchResults({"q": "coffee", "location": "Austin,Texas"})
json_results = query.get_json()

This example runs a search for "coffee" using your secret API key (see below for how to set it).

The SerpApi service (backend):

  • searches Google using the query: q = "coffee"
  • parses the messy HTML responses
  • returns a standardized JSON response

The Python class GoogleSearchResults:

  • formats the request to the SerpApi server
  • executes a GET HTTP request
  • parses the JSON response into a dictionary

Et voila..

Example

How to set the SERP API key

The Serp API key can be set globally using a singleton pattern.

GoogleSearchResults.SERP_API_KEY = "Your Private Key"

The Serp API key can be provided for each client.

query = GoogleSearchResults({"q": "coffee", "serp_api_key": "Your Private Key"})

Search API capability

client_params = {
  "q": "query",
  "google_domain": "Google Domain",
  "location": "Location Requested",
  "device": "desktop|mobile|tablet",
  "hl": "Google UI Language",
  "gl": "Google Country",
  "safe": "Safe Search Flag",
  "num": "Number of Results",
  "start": "Pagination Offset",
  "serp_api_key": "Your SERP API Key",
  "tbm": "nws|isch|shop",
  "tbs": "custom to-be-searched criteria",
  "async": "true|false",  # allow asynchronous search
  "output": "json|html"   # output format
}

# define the search client
query = GoogleSearchResults(client_params)

# override an existing parameter
query.params_dict["location"] = "Portland"

# get the result as raw HTML
html_results = query.get_html()

# get the result as JSON
json_results = query.get_json()

# get the result as a dictionary
dict_results = query.get_dict()

The full documentation is available at https://serpapi.com/search-api.

See below for more hands-on examples.

Example by specification

We love true open source, continuous integration, and Test Driven Development (TDD). We test our infrastructure around the clock to achieve the best QoS (Quality of Service).

The directory test/ includes specification/examples.

Set your API key.

export API_KEY="your secret key"

Run the tests.

make test

Location API

client = GoogleSearchResults({})
location_list = client.get_location("Austin", 3)
print(location_list)

It prints the first 3 locations matching "Austin" (Texas, Texas, Rochester):

[   {   'canonical_name': 'Austin,TX,Texas,United States',
        'country_code': 'US',
        'google_id': 200635,
        'google_parent_id': 21176,
        'gps': [-97.7430608, 30.267153],
        'id': '585069bdee19ad271e9bc072',
        'keys': ['austin', 'tx', 'texas', 'united', 'states'],
        'name': 'Austin, TX',
        'reach': 5560000,
        'target_type': 'DMA Region'},
        ...]

Search Archive API

Search results are stored in a temporary cache. A previous search can be retrieved from the cache for free.

client = GoogleSearchResults({"q": "Coffee", "location": "Austin,Texas"})
search_result = client.get_dict()
search_id = search_result.get("search_metadata").get("id")
print(search_id)

Now let's retrieve the previous search from the archive.

archived_search_result = GoogleSearchResults({}).get_search_archive(search_id, 'json')
print(archived_search_result.get("search_metadata").get("id"))

It prints the ID of the search result retrieved from the archive.

Account API

client = GoogleSearchResults({})
account = client.get_account()
print(account)

It prints your account information.

Search Google Images

client = GoogleSearchResults({"q": "coffe", "tbm": "isch"})
for image_result in client.get_json()['images_results']:
    link = image_result["original"]
    try:
        print("link: " + link)
        # wget.download(link, '.')
    except:
        pass

This code prints all the image links, and downloads the images if you uncomment the line with wget (a Linux/macOS tool to download files).

This tutorial covers more ground on this topic: https://github.com/serpapi/showcase-serpapi-tensorflow-keras-image-training

Search Google News

client = GoogleSearchResults({
    "q": "coffee",  # search query
    "tbm": "nws",   # news vertical
    "tbs": "qdr:d", # last 24h
    "num": 10
})
for offset in [0,1,2]:
    client.params_dict["start"] = offset * 10
    data = client.get_json()
    for news_result in data['news_results']:
        print(str(news_result['position'] + offset * 10) + " - " + news_result['title'])

This script prints the titles from the first 3 pages of news results for the last 24 hours.

Search Google Shopping

client = GoogleSearchResults({
    "q": "coffee",     # search query
    "tbm": "shop",     # shopping
    "tbs": "p_ord:rv", # sort by review
    "num": 100
})
data = client.get_json()
for shopping_result in data['shopping_results']:
    print(str(shopping_result['position']) + " - " + shopping_result['title'])

This script prints all the shopping results ordered by review.

Google Search By Location

With SerpApi, we can run Google searches from anywhere in the world. This code looks for the best coffee shop in each city.

for city in ["new york", "paris", "berlin"]:
  location = GoogleSearchResults({}).get_location(city, 1)[0]["canonical_name"]
  client = GoogleSearchResults({
      "q": "best coffee shop",   # search client
      "location": location,
      "num": 1,
      "start": 0
  })
  data = client.get_json()
  top_result = data["organic_results"][0]["title"]
  print(city + ": " + top_result)

Batch Asynchronous search

We offer two ways to boost your searches thanks to the async parameter:

  • Blocking - async=false - more compute intensive, because the client must hold many connections. (default)
  • Non-blocking - async=true - the way to go for a large number of queries submitted in batches. (recommended)

The code below runs a simple batch of asynchronous searches.
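This is a minimal sketch rather than an official batch API: it assumes that a search submitted with "async": "true" returns its search_metadata.id immediately, and that the finished result can later be fetched with get_search_archive, as shown earlier in this document.

from google_search_results import GoogleSearchResults

GoogleSearchResults.SERP_API_KEY = "Your Private Key"

queries = ["coffee", "tea", "mate"]
search_ids = []

# Submit every search without waiting for the results.
for q in queries:
    client = GoogleSearchResults({"q": q, "async": "true"})
    result = client.get_json()
    # Assumption: the immediate response carries the search ID
    # under search_metadata.id, as in the archive example above.
    search_ids.append(result["search_metadata"]["id"])

# Later, collect each finished search from the archive.
archive_client = GoogleSearchResults({})
for search_id in search_ids:
    archived = archive_client.get_search_archive(search_id, 'json')
    # In practice you may need to poll until the search has completed.
    print(archived.get("search_metadata").get("id"))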

Conclusion

SerpApi supports Google Images, News, Shopping, and more. To enable a type of search, the field tbm (to be matched) must be set to:

  • isch: Google Images API.
  • nws: Google News API.
  • shop: Google Shopping API.
  • any other Google service should work out of the box.
  • (no tbm parameter): regular Google search.

The field tbs allows you to customize the search even more.
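For instance, here is a quick illustration reusing the values from the news example above, restricting a news search to the last 24 hours:

client = GoogleSearchResults({
    "q": "coffee",
    "tbm": "nws",   # news vertical
    "tbs": "qdr:d"  # last 24 hours
})
news_results = client.get_json()["news_results"]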

The full documentation is available at https://serpapi.com/search-api.
