
Helpers to fetch & parse text on pages with requests, lxml, & beautifulsoup4

Project description


Install system requirements for lxml

On Debian/Ubuntu:

% sudo apt-get install -y libxml2 libxslt1.1 libxml2-dev libxslt1-dev zlib1g-dev

On macOS (with Homebrew):

% brew install libxml2

Install with pip

% pip3 install parse-helper


The following scripts are provided: ph-goo, ph-ddg, ph-download-files, ph-download-file-as, and ph-soup-explore.

$ venv/bin/ph-goo --help
Usage: ph-goo [OPTIONS] [QUERY]

  Pass a search query to google

  --page INTEGER                  page number of results
  --since [|year|month|week|day]  limit results by time
  --site TEXT                     limit results by site/domain
  --filetype [|pdf|xls|ppt|doc|rtf]
                                  limit results by filetype
  --help                          Show this message and exit.
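The `--site`, `--filetype`, and `--since` options correspond to Google's search operators and time-limit parameter. A minimal sketch of how such options could be combined into a query is below; `build_query` is an illustrative name, not part of parse-helper's actual API:

```python
def build_query(query, site='', filetype='', since=''):
    """Combine a base query with Google's site:/filetype: operators.

    Illustrative only -- parse-helper's internals may differ.
    """
    parts = [query]
    if site:
        parts.append('site:' + site)          # limit results to one domain
    if filetype:
        parts.append('filetype:' + filetype)  # e.g. pdf, xls, ppt, doc, rtf
    q = ' '.join(parts)
    # Google expresses the time limit separately, via the "tbs" URL parameter
    # (qdr:d = past day, qdr:w = past week, qdr:m = past month, qdr:y = past year).
    tbs = {'day': 'qdr:d', 'week': 'qdr:w',
           'month': 'qdr:m', 'year': 'qdr:y'}.get(since, '')
    return q, tbs

print(build_query('scaling redis', site='redis.io', filetype='pdf', since='week'))
```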

$ venv/bin/ph-ddg --help
Usage: ph-ddg [OPTIONS] [QUERY]

  Pass a search query to duckduckgo api

  --help  Show this message and exit.

$ venv/bin/ph-download-files --help
Usage: ph-download-files [OPTIONS] [ARGS]...

  Download all links to local files

  - args: urls or filenames containing urls

  --help  Show this message and exit.

$ venv/bin/ph-download-file-as --help
Usage: ph-download-file-as [OPTIONS] URL [LOCALFILE]

  Download link to local file

  - url: a string

  - localfile: a string

  --help  Show this message and exit.
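When LOCALFILE is omitted, a natural choice is to derive it from the URL's last path segment. The sketch below shows that pattern using only the standard library (parse-helper itself fetches pages with requests); `filename_from_url` and `download_file_as` are assumed names for illustration, not parse-helper's documented API:

```python
import os
import shutil
from urllib.parse import urlparse
from urllib.request import urlopen

def filename_from_url(url):
    """Derive a local filename from the last path segment of the URL."""
    name = os.path.basename(urlparse(url).path)
    return name or 'index.html'   # fall back for bare domains like https://x.com/

def download_file_as(url, localfile=None):
    """Download url to localfile, streaming the body to disk."""
    localfile = localfile or filename_from_url(url)
    with urlopen(url) as resp, open(localfile, 'wb') as fp:
        shutil.copyfileobj(resp, fp)
    return localfile
```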

$ venv/bin/ph-soup-explore --help
Usage: ph-soup-explore [OPTIONS] [URL_OR_FILE]

  Create a soup object from a url or file and explore with ipython

  --help  Show this message and exit.

The library can also be used directly from Python:
In [1]: import parse_helper as ph

In [2]: ph.USER_AGENT
Out[2]: 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/58.0.3029.110 Chrome/58.0.3029.110 Safari/537.36'

In [3]: ph.google_serp('scaling redis')
[{'link': '',
  'title': 'Partitioning: how to split data among multiple Redis instances. – Redis'},
 {'link': '',
  'title': 'How Twitter Uses Redis to Scale - 105TB RAM ... - High Scalability'},
 {'link': '',
  'title': 'Scaling Redis Clusters with Replica Nodes - Amazon ElastiCache'},
 {'link': '',
  'title': 'Scaling Up Single-Node Redis Clusters - Amazon ElastiCache'},
 {'link': '',
  'title': 'Chapter 10: Scaling Redis - Redis Labs'},
 {'link': '',
  'title': 'Scaling Out Redis: Read-Only Slaves or Cluster? - Redis Labs'},
 {'link': '',
  'title': 'ten thousand hours • Scaling Redis'},
 {'link': '',
  'title': 'How scalable is Redis? - Quora'},
 {'link': '',
  'title': 'How Twitter Uses Redis To Scale - 105TB RAM, 39MM QPS ... - LinkedIn'},
 {'link': '',
  'title': 'How to Scale Azure Redis Cache | Microsoft Docs'}]
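Under the hood, this style of scraping amounts to parsing fetched HTML with beautifulsoup4 and collecting the link/title pairs. A minimal sketch of that pattern is below; the HTML snippet and selectors are made up for illustration and do not reflect parse-helper's actual implementation:

```python
from bs4 import BeautifulSoup

# Made-up HTML standing in for a fetched results page.
html = """
<html><body>
  <h3 class="r"><a href="https://redis.io/topics/partitioning">Partitioning – Redis</a></h3>
  <h3 class="r"><a href="https://example.com/twitter-redis">How Twitter Uses Redis to Scale</a></h3>
</body></html>
"""

# parse-helper depends on lxml; 'html.parser' is used here only to keep
# the sketch dependency-free beyond bs4 itself.
soup = BeautifulSoup(html, 'html.parser')
results = [{'link': a['href'], 'title': a.get_text()}
           for a in soup.select('h3.r a')]
print(results)
```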

Download files

parse_helper-0.1.17-py3-none-any.whl (13.0 kB), py3 wheel, uploaded Jul 20, 2018
