SEO Python scraper to extract data from major search engine result pages. Extract data like URL, title, snippet, rich snippet and the result type from search results for given keywords. Detect ads or take automated screenshots. You can also fetch the text content of URLs provided in search results or of your own. It's useful for SEO and business-related research tasks.
Extract these result types
ads_main - advertisements within regular search results
image - result from image search
news - news teaser within regular search results
results - standard search result
shopping - shopping teaser within regular search results
videos - video teaser within regular search results
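Since every scraped result carries its type, the rows can be grouped by it for analysis. A minimal sketch with sample data, assuming each result is a dict with a type field (the field name "serp_type" is an assumption, not confirmed by this page):

```python
from collections import defaultdict

# Hypothetical result rows; the "serp_type" field name is an assumption
results = [
    {"serp_type": "results", "serp_title": "Standard hit"},
    {"serp_type": "ads_main", "serp_title": "An ad"},
    {"serp_type": "news", "serp_title": "A news teaser"},
]

# Bucket rows by their result type
by_type = defaultdict(list)
for row in results:
    by_type[row["serp_type"]].append(row)

print(sorted(by_type))  # → ['ads_main', 'news', 'results']
```

This makes it easy to, for example, count how many ads versus standard results a keyword triggers.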
For each result of a results page you get:
Also get a screenshot of each result page. You can also scrape the text content of each result URL. It is also possible to save the results as CSV for later analysis. If required, you can also use your own proxy list.
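If you want to export results yourself rather than rely on the built-in CSV output, the standard library is enough. A sketch with sample data, assuming the scraper returns a list of dicts (the field names below are assumptions):

```python
import csv

# Hypothetical result rows as returned by the scraper; field names are assumptions
results = [
    {"serp_url": "https://example.com", "serp_title": "Example", "serp_type": "results"},
    {"serp_url": "https://example.org", "serp_title": "Example 2", "serp_type": "news"},
]

# Tab-separated and quoted, matching the format of SerpScrap's own CSV output
with open("results.csv", "w", newline="", encoding="utf-8") as fh:
    writer = csv.DictWriter(fh, fieldnames=list(results[0]),
                            delimiter="\t", quoting=csv.QUOTE_ALL)
    writer.writeheader()
    writer.writerows(results)
```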
See http://serpscrap.readthedocs.io/en/latest/ for documentation.
Source is available at https://github.com/ecoron/SerpScrap
The easy way to install:
pip uninstall SerpScrap -y
pip install SerpScrap --upgrade
More details in the install section of the documentation.
SerpScrap in your applications
#!/usr/bin/python3
# -*- coding: utf-8 -*-
import pprint
import serpscrap

keywords = ['example']

config = serpscrap.Config()
config.set('scrape_urls', False)

scrap = serpscrap.SerpScrap()
scrap.init(config=config.get(), keywords=keywords)
results = scrap.run()

for result in results:
    pprint.pprint(result)
More details in the examples section of the documentation.
To avoid encoding/decoding issues, use these commands before you start using SerpScrap in your CLI.
chcp 65001
set PYTHONIOENCODING=utf-8
SerpScrap should work on Linux, Windows and Mac OS with Python >= 3.4 installed.
SerpScrap requires lxml
Doesn’t work on iOS
Notes about major changes between releases
updated dependencies: chromedriver >= 76.0.3809.68 to use a current driver, sqlalchemy >= 1.3.7 to resolve security issues, and other minor updates
minor changes in install_chrome.sh
I recommend updating to the latest version of SerpScrap, because the search engine has updated the markup of search result pages (SERPs)
Update and cleanup of selectors to fetch results
new result type: videos
Chrome headless is now the default browser, usage of phantomJS is deprecated
chromedriver is installed on the first run (tested on Linux and Windows. Mac OS should also work)
behavior of scraping raw text contents from SERP URLs, and of course from given URLs, has changed
run scraping of serp results and contents at once
csv output format changed, now it’s tab separated and quoted
support for headless chrome, adjusted default time between scrapes
result types added (news, shopping, image)
Image search is supported
text processing tools removed.
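Since the CSV output format changed to tab-separated and quoted, older parsing code may need adjusting. A minimal sketch of reading one line in the new format (the sample values are assumptions):

```python
import csv
import io

# Sample line in the changed output format: tab-separated, quoted (values assumed)
line = '"example"\t"https://example.com"\t"results"\n'
row = next(csv.reader(io.StringIO(line), delimiter="\t"))
print(row)  # → ['example', 'https://example.com', 'results']
```

The stdlib csv module strips the quoting automatically once the tab delimiter is set.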
SerpScrap uses headless Chrome and lxml to scrape SERP results. For the raw text content of fetched URLs it uses beautifulsoup4. SerpScrap also supports PhantomJS (deprecated), a scriptable headless WebKit browser, which is installed automatically on the first run (Linux, Windows). The scrapcore was based on GoogleScraper, an outdated project, and has many changes and improvements.