
A Python scraper to extract and analyze data from search engine result pages and URLs. It extracts data such as the URL, title, and snippet of each result, or ratings, for given keywords.

Project description


A Python scraper to extract and analyze data from search engine result pages and URLs. It can be useful for SEO and research tasks. Some text processing tools are also available.

  • Extract the position, URL, title, description, related keywords, and other details of search results for the given keywords.

  • Use a list of proxies for scraping.

  • Also scrape the origin URL of each search result and extract the cleaned raw text content from that page.

  • Save results as CSV for future analytics.

  • Use text processing tools such as the TF-IDF analyzer or the markov chain text generator to generate new sentences.
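To illustrate the idea behind the markov chain text generator, here is a minimal stdlib sketch (not SerpScrap's own implementation): it maps each word to the words observed after it, then walks that map to produce new sentences.

```python
import random


def build_chain(text):
    """Map each word to the list of words that follow it in the text."""
    words = text.split()
    chain = {}
    for current, nxt in zip(words, words[1:]):
        chain.setdefault(current, []).append(nxt)
    return chain


def generate(chain, start, length=8, seed=0):
    """Walk the chain from a start word, picking a random follower each step."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: no word was ever seen after this one
        out.append(random.choice(followers))
    return ' '.join(out)


chain = build_chain("the cat sat on the mat the cat ran")
print(generate(chain, "the"))
```

SerpScrap applies the same principle to the raw text it scrapes from result pages, so the generated sentences echo the vocabulary of the scraped content.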

See http://serpscrap.readthedocs.io/en/latest/ for documentation.

Source is available at https://github.com/ecoron/SerpScrap.

Install

The easy way:

pip uninstall SerpScrap -y
pip install SerpScrap --upgrade

In some cases it is required to install python-scipy first:

sudo apt-get build-dep python-scipy

More details in the install [1] section of the documentation.

Usage

SerpScrap in your applications

 #!/usr/bin/python3
 # -*- coding: utf-8 -*-
 import pprint
 import serpscrap

 keywords = ['example']

 config = serpscrap.Config()
 config.set('scrape_urls', False)

 scrap = serpscrap.SerpScrap()
 scrap.init(config=config.get(), keywords=keywords)
 results = scrap.run()

 for result in results:
     pprint.pprint(result)
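Each result is a plain dict, so writing them to CSV for later analysis takes only the stdlib. A minimal sketch (the field names below are illustrative, not SerpScrap's actual schema, and SerpScrap can also write CSV itself via its config):

```python
import csv


def save_results(results, path):
    """Write a list of result dicts to a CSV file, one row per result."""
    if not results:
        return
    # collect every key seen across all rows so no column is dropped
    fieldnames = sorted({key for row in results for key in row})
    with open(path, 'w', newline='', encoding='utf-8') as fh:
        writer = csv.DictWriter(fh, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(results)


# illustrative rows; real SerpScrap results carry more fields
save_results(
    [{'query': 'example', 'url': 'http://example.com', 'title': 'Example'}],
    'results.csv',
)
```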

More details in the examples [2] section of the documentation.

To avoid encode/decode issues on Windows, run these commands before you start using SerpScrap in your CLI:

chcp 65001
set PYTHONIOENCODING=utf-8

References

SerpScrap uses PhantomJS [3], a scriptable headless WebKit browser, which is installed automatically on the first run (Linux, Windows). The scrapcore is based on GoogleScraper [4], with several improvements.

Download files

Download the file for your platform.

Source Distribution

SerpScrap-0.7.0.tar.gz (36.9 kB)
