
Search engine result page scraper

Project description

[Build Status: Travis CI] [Coverage: Coveralls] [License: Apache 2]

Overview

SERList scrapes information from a search engine results page, including:

  • title

  • link

  • description

SERList handles the results of the supported search engines out of the box, without any configuration (such as XPath expressions).

Installation

Install using pip:

pip install serlist

Basic Usage

from serlist import SerpScraper

SerpScraper().scrap(text)

The variable text is the HTML text of a search engine result page.
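For example, the page text can be fetched with an HTTP client and handed to the scraper. The sketch below uses the requests library (an assumption, not a dependency of serlist); the search URL is illustrative only, and the structure of the returned results is simply printed as-is.

import requests

from serlist import SerpScraper

# Fetch a search engine result page; the query URL below is only an example.
resp = requests.get("https://www.bing.com/search", params={"q": "python"})

# Pass the page text to the scraper. Per the overview, each extracted
# result carries a title, a link, and a description.
results = SerpScraper().scrap(resp.text)
print(results)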

Documentation

https://serlist.readthedocs.io/

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

serlist-0.1.0.tar.gz (12.2 kB)

Uploaded Source

File details

Details for the file serlist-0.1.0.tar.gz.

File metadata

  • Download URL: serlist-0.1.0.tar.gz
  • Upload date:
  • Size: 12.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: Python-urllib/3.6

File hashes

Hashes for serlist-0.1.0.tar.gz

  • SHA256: 4957b5663d65a283dbb2cb730105996c03f6fd5a634ecf8fbae0d9597f633aed
  • MD5: 744d14d637ec81ee11c72bf81de5a48a
  • BLAKE2b-256: e991a6b9974f24d8c83566a8e8e6a1e69e8de80231afa259c765be545de2ce4c

See more details on using hashes here.
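After downloading the sdist, the SHA256 digest listed above can be checked locally. The snippet below is a minimal sketch using Python's standard hashlib module and assumes serlist-0.1.0.tar.gz is in the current directory.

import hashlib

# Expected SHA256 digest published above for serlist-0.1.0.tar.gz.
expected = "4957b5663d65a283dbb2cb730105996c03f6fd5a634ecf8fbae0d9597f633aed"

# Hash the downloaded file and compare against the published digest.
with open("serlist-0.1.0.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print("OK" if digest == expected else "Hash mismatch!")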
