A Python scraper to extract and analyze data from search engine result pages and URLs. Extracts data such as the URL, title, snippet, and ratings of results for given keywords.

Project description


An SEO Python scraper to extract and analyze data from major search engine SERPs, or the text content of any other URL. It extracts data like the title, URL, type, text snippet, and rich snippet of search results for given keywords, detects ads, and takes automated screenshots. It may be useful for SEO and research tasks.

Extract these result types

  • ads_main - advertisements within regular search results
  • image - result from image search
  • news - news teaser within regular search results
  • results - standard search result
  • shopping - shopping teaser within regular search results

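As a minimal sketch, the snippet below groups scraped results by type, assuming each result returned by scrap.run() (see the usage example further down) behaves like a dict carrying one of the type values above; the actual key names may differ, so check the documentation.

 from collections import Counter

 def count_result_types(results):
     # 'type' is assumed to match the field listed below; adjust the key
     # name to whatever scrap.run() actually returns
     return Counter(result.get('type', 'unknown') for result in results)

 # e.g. Counter({'results': 9, 'ads_main': 2, 'news': 1})
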
For each result on a results page you get

  • domain
  • rank
  • rich snippet
  • site links
  • snippet
  • title
  • type
  • url
  • visible url

You also get a screenshot of each result page, and you can scrape the text content of each result URL. It is also possible to save the results as CSV for later analysis. If required, you can use your own proxy list.
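If you want to write the CSV yourself, here is a minimal sketch using only the standard library, assuming each result is a dict whose keys roughly match the field names listed above (the real key names may differ, see the documentation):

 import csv

 # field names taken from the list above; adjust them to the keys
 # that scrap.run() actually returns
 fields = ['rank', 'type', 'domain', 'url', 'visible_url', 'title', 'snippet']

 def save_as_csv(results, path='serp_results.csv'):
     with open(path, 'w', newline='', encoding='utf-8') as handle:
         writer = csv.DictWriter(handle, fieldnames=fields)
         writer.writeheader()
         for result in results:
             writer.writerow({key: result.get(key, '') for key in fields})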

Resources

See http://serpscrap.readthedocs.io/en/latest/ for documentation.

Source is available at https://github.com/ecoron/SerpScrap

Install

The easy way to install or upgrade:

pip uninstall SerpScrap -y
pip install SerpScrap --upgrade

More details in the install [1] section of the documentation.

Usage

SerpScrap in your applications

 #!/usr/bin/python3
 # -*- coding: utf-8 -*-
 import pprint
 import serpscrap

 keywords = ['example']

 config = serpscrap.Config()
 # do not fetch the text content of each result url, only the serp data
 config.set('scrape_urls', False)

 scrap = serpscrap.SerpScrap()
 scrap.init(config=config.get(), keywords=keywords)
 results = scrap.run()

 for result in results:
     pprint.pprint(result)

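A variant of the example above that also scrapes the text content of each result URL, simply by flipping the scrape_urls flag used in the config (a sketch; see the examples in the documentation for the full set of options):

 config = serpscrap.Config()
 # also fetch the text content of each result url, not only the serp data
 config.set('scrape_urls', True)

 scrap = serpscrap.SerpScrap()
 scrap.init(config=config.get(), keywords=['example'])
 results = scrap.run()
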
More details in the examples [2] section of the documentation.

To avoid encode/decode issues on Windows, run these commands before you start using SerpScrap in your CLI:

chcp 65001
set PYTHONIOENCODING=utf-8
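
Alternatively, on Python 3.7 or newer you can force UTF-8 output from inside the script itself; this is plain Python, not a SerpScrap-specific setting:

 import sys

 # make stdout/stderr use utf-8 regardless of the console code page
 sys.stdout.reconfigure(encoding='utf-8')
 sys.stderr.reconfigure(encoding='utf-8')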

Changes

Notes about major changes between releases

0.10.0

  • Support for headless Chrome; adjusted the default time between scrapes

0.9.0

  • Result types added (news, shopping, image)
  • Image search is supported

0.8.0

  • Text processing tools removed
  • Fewer requirements

References

SerpScrap uses PhantomJS [3], a scriptable headless WebKit, which is installed automatically on the first run (Linux, Windows). The scrape core is based on GoogleScraper [4] with several improvements.

[1] http://serpscrap.readthedocs.io/en/latest/install.html
[2] http://serpscrap.readthedocs.io/en/latest/examples.html
[3] https://github.com/ariya/phantomjs
[4] https://github.com/NikolaiT/GoogleScraper
