
Project description


SEO Python scraper to extract data from major search engine result pages. Extract data such as URL, title, snippet, rich snippet and result type from the search results for given keywords. Detect ads or take automated screenshots. You can also fetch the text content of URLs found in the search results or supplied by you. It’s useful for SEO and business-related research tasks.

Extract these result types

  • ads_main - advertisements within regular search results

  • image - result from image search

  • news - news teaser within regular search results

  • results - standard search result

  • shopping - shopping teaser within regular search results

  • videos - video teaser within regular search results

For each result on a result page you get

  • domain

  • rank

  • rich snippet

  • site links

  • snippet

  • title

  • type

  • url

  • visible url
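As a rough sketch, a single result could be represented as a Python dict with these fields. The key names below mirror the field list above and are illustrative assumptions, not SerpScrap's documented keys:

```python
# Illustrative sketch only: key names are NOT guaranteed to match
# SerpScrap's actual result keys.
result = {
    'domain': 'example.com',
    'rank': 1,
    'rich_snippet': None,
    'sitelinks': [],
    'snippet': 'This domain is for use in illustrative examples.',
    'title': 'Example Domain',
    'type': 'results',  # e.g. results, ads_main, news, shopping, videos
    'url': 'https://example.com/',
    'visible_url': 'example.com',
}

# Typical post-processing: keep organic results only, ordered by rank.
organic = sorted(
    (r for r in [result] if r['type'] == 'results'),
    key=lambda r: r['rank'],
)
```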

You can also take a screenshot of each result page, scrape the text content of each result URL, and save the results as CSV for later analysis. If required, you can use your own proxy list.
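A hedged sketch of how these extras might be enabled via the Config object. The key names below are assumptions for illustration only; verify them against the documentation:

```python
import serpscrap

config = serpscrap.Config()
# NOTE: the following key names are assumptions, not verified against
# the SerpScrap documentation.
config.set('screenshot', True)           # hypothetical: save a screenshot per page
config.set('output_format', 'csv')       # hypothetical: write results as CSV
config.set('proxy_file', 'proxies.txt')  # hypothetical: use your own proxy list
```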

Resources

See http://serpscrap.readthedocs.io/en/latest/ for documentation.

Source is available at https://github.com/ecoron/SerpScrap

Install

The easy way:

pip uninstall SerpScrap -y
pip install SerpScrap --upgrade

More details in the install [1] section of the documentation.

Usage

SerpScrap in your applications

 #!/usr/bin/python3
 # -*- coding: utf-8 -*-
 import pprint
 import serpscrap

 keywords = ['example']

 config = serpscrap.Config()
 config.set('scrape_urls', False)

 scrap = serpscrap.SerpScrap()
 scrap.init(config=config.get(), keywords=keywords)
 results = scrap.run()

 for result in results:
     pprint.pprint(result)

More details in the examples [2] section of the documentation.
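To additionally fetch the text content of each result URL, the `scrape_urls` flag from the example above (set to False there) can be enabled; expect longer run times:

```python
import serpscrap

config = serpscrap.Config()
config.set('scrape_urls', True)  # also download and extract text from each result URL
```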

To avoid encode/decode issues on Windows, run these commands before using SerpScrap in your CLI.

chcp 65001
set PYTHONIOENCODING=utf-8
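From inside Python (3.7+), a roughly equivalent effect can be sketched by reconfiguring the standard streams to UTF-8:

```python
import sys

# Equivalent in spirit to PYTHONIOENCODING=utf-8, applied at runtime:
# reconfigure stdout/stderr to UTF-8 where the stream supports it.
for stream in (sys.stdout, sys.stderr):
    if hasattr(stream, "reconfigure"):
        stream.reconfigure(encoding="utf-8")
```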

Supported OS

  • SerpScrap should work on Linux, Windows and Mac OS with Python >= 3.4 installed

  • SerpScrap requires lxml

  • It doesn’t work on iOS

Changes

Notes about major changes between releases

0.13.0

  • updated dependencies: chromedriver >= 76.0.3809.68 to match current Chrome releases, sqlalchemy >= 1.3.7 to fix security issues, plus other minor dependency updates

  • minor changes to install_chrome.sh

0.12.0

An update to the latest version of SerpScrap is recommended, because the search engine has updated the markup of its search result pages (SERPs).

  • Update and cleanup of selectors to fetch results

  • new result type: videos

0.11.0

  • Headless Chrome is now the default browser; usage of PhantomJS is deprecated

  • chromedriver is installed on the first run (tested on Linux and Windows; Mac OS should also work)

  • the behavior of scraping raw text content from SERP URLs (and from user-provided URLs) has changed

  • SERP results and their contents can now be scraped in a single run

  • the CSV output format changed; it is now tab-separated and quoted
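Since the CSV output is tab-separated and quoted, reading it back can be sketched with the stdlib csv module. The column names here are illustrative, not the tool's exact header:

```python
import csv
import io

# Simulated SerpScrap CSV output (0.11.0+): tab-separated, quoted fields.
raw = (
    '"rank"\t"title"\t"url"\n'
    '"1"\t"Example Domain"\t"https://example.com/"\n'
)

rows = list(csv.reader(io.StringIO(raw), delimiter='\t', quotechar='"'))
header, first = rows[0], rows[1]
```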

0.10.0

  • support for headless chrome, adjusted default time between scrapes

0.9.0

  • result types added (news, shopping, image)

  • Image search is supported

0.8.0

  • text processing tools removed

  • fewer requirements

References

SerpScrap uses headless Chrome [3] and lxml [4] to scrape SERP results. For the raw text content of fetched URLs it uses beautifulsoup4 [5]. SerpScrap also supports PhantomJS [6] (deprecated), a scriptable headless WebKit browser, which is installed automatically on the first run (Linux, Windows). The scrape core was originally based on GoogleScraper [7], an outdated project, and has seen many changes and improvements.
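Extracting visible text from a fetched page, roughly as described, can be sketched with the stdlib alone. SerpScrap itself uses beautifulsoup4 for this; the approximation below is not its actual implementation:

```python
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collect visible text, skipping <script> and <style> contents."""

    SKIP = {'script', 'style'}

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.parts.append(data.strip())


html = ('<html><head><style>p {}</style></head>'
        '<body><p>Hello</p><script>var x = 1;</script><p>world</p></body></html>')
parser = TextExtractor()
parser.feed(html)
text = ' '.join(parser.parts)
```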

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

SerpScrap-0.13.0.tar.gz (37.4 kB view details)

Uploaded Source

Built Distributions

SerpScrap-0.13.0-py3.7.egg (100.4 kB view details)

Uploaded Source

SerpScrap-0.13.0-py3-none-any.whl (45.5 kB view details)

Uploaded Python 3

File details

Details for the file SerpScrap-0.13.0.tar.gz.

File metadata

  • Download URL: SerpScrap-0.13.0.tar.gz
  • Upload date:
  • Size: 37.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.11.0 pkginfo/1.4.2 requests/2.20.1 setuptools/40.2.0 requests-toolbelt/0.8.0 tqdm/4.25.0 CPython/3.7.0

File hashes

Hashes for SerpScrap-0.13.0.tar.gz
Algorithm Hash digest
SHA256 d85cd4755245b61ccaecab84237db204bd9a76a7e7ee156ccf83641c84a054ae
MD5 1f9e7522e151d30dd1ccad95191830d6
BLAKE2b-256 d7d3ac73e8ace3cc42fdc2c6cfb2a617f224d36c548cc62fed2d2c14a51197c2

See more details on using hashes here.

File details

Details for the file SerpScrap-0.13.0-py3.7.egg.

File metadata

  • Download URL: SerpScrap-0.13.0-py3.7.egg
  • Upload date:
  • Size: 100.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.11.0 pkginfo/1.4.2 requests/2.20.1 setuptools/40.2.0 requests-toolbelt/0.8.0 tqdm/4.25.0 CPython/3.7.0

File hashes

Hashes for SerpScrap-0.13.0-py3.7.egg
Algorithm Hash digest
SHA256 7926bd85da7c4fd51968a17ff8c92ef43b79b73fcf479f917dfdd96b7ad85406
MD5 1bcca1fe1cc96f1ebb42ce9c7001274b
BLAKE2b-256 b0f0a2772951255398510ba37d5eb5691704fb090ffba158194908a8053ec5d7


File details

Details for the file SerpScrap-0.13.0-py3-none-any.whl.

File metadata

  • Download URL: SerpScrap-0.13.0-py3-none-any.whl
  • Upload date:
  • Size: 45.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.11.0 pkginfo/1.4.2 requests/2.20.1 setuptools/40.2.0 requests-toolbelt/0.8.0 tqdm/4.25.0 CPython/3.7.0

File hashes

Hashes for SerpScrap-0.13.0-py3-none-any.whl
Algorithm Hash digest
SHA256 0f7dd9766858b3353a117e4d72f140a61a66f34139caa6b16e9656b23cfbdc87
MD5 1ddf6c4c0b578a1d3f32909898848545
BLAKE2b-256 156262df665e9ccb20ea72597713a835d29347df2aeb161a02ffda5ad969559f

