
a module for polling urls and stats from homepages

Project description

pageone

Install

pip install pageone

Test

Requires nose

nosetests

Usage

pageone does two things: it extracts article urls from a site’s homepage, and it uses selenium and phantomjs to find the relative positions of those urls on the page.

To get stats about the positions of links, use link_stats:

from pageone import PageOne

p = PageOne(url='http://www.propublica.org/')

# get stats about link positions
for link in p.link_stats():
    print(link)

This will return a list of dictionaries that look like this:

{
 'bucket': 4,
 'datetime': datetime.datetime(2014, 6, 7, 16, 6, 3, 533818),
 'font_size': 13,
 'has_img': 1,
 'headline': u'',
 'homepage': 'http://www.propublica.org/',
 'img_area': 3969,
 'img_height': 63,
 'img_src': u'http://www.propublica.org/images/ngen/gypsy_image_medium/mpmh_victory_drive_140x140_130514_1.jpg',
 'img_width': 63,
 'url': u'http://www.propublica.org/article/protect-service-members-defense-department-plans-broad-ban-high-cost-loans',
 'x': 61,
 'x_bucket': 1,
 'y': 730,
 'y_bucket': 4
}

Here the bucket variables represent where a link falls in a 200x200 pixel grid. For x_bucket, the number increases from left to right; for y_bucket, it increases from top to bottom; bucket moves from top-left to bottom-right. You can customize the size of this grid by passing bucket_pixels to link_stats, eg:

from pageone import PageOne

p = PageOne(url='http://www.propublica.org/')

# get stats about link positions, using a 100x100 pixel grid
for link in p.link_stats(bucket_pixels=100):
    print(link)
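
For intuition, the sample output above is consistent with the buckets being 1-indexed floor divisions of the pixel coordinates. The sketch below only illustrates that interpretation; the to_bucket helper is hypothetical and not taken from pageone's source:

# illustrative sketch: how x/y coordinates could map to grid buckets
# (assumes 1-indexed floor division; not pageone's actual implementation)
def to_bucket(pixel, bucket_pixels=200):
    return pixel // bucket_pixels + 1

# values from the sample output above
print(to_bucket(61))   # x=61  -> x_bucket 1
print(to_bucket(730))  # y=730 -> y_bucket 4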

To simply get all of the article urls on a homepage, use articles:

from pageone import PageOne
p = PageOne(url='http://www.propublica.org/')

for article in p.articles():
    print(article)

If you also want article urls that point to other sites, use incl_external:

from pageone import PageOne
p = PageOne(url='http://www.propublica.org/')

for article in p.articles(incl_external=True):
    print(article)
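
If it helps to see which of the returned urls are external, a small post-processing step can group them by domain. This is just a sketch using the standard library (it assumes a Python 3 environment; on Python 2 the import is from urlparse) and is not part of pageone's API:

from urllib.parse import urlparse
from pageone import PageOne

p = PageOne(url='http://www.propublica.org/')

# group article urls by the domain they point to (sketch, not part of pageone)
by_domain = {}
for article in p.articles(incl_external=True):
    domain = urlparse(article).netloc
    by_domain.setdefault(domain, []).append(article)

for domain, urls in by_domain.items():
    print(domain, len(urls))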

How do I know which urls are articles?

pageone uses siegfried for url parsing and validation. If you want to apply a custom regex for article url validation, you can pass in a pattern to either link_stats or articles, eg:

from pageone import PageOne
import re

pattern = re.compile(r'.*propublica.org/[a-z]+/[a-z0-9/-]+')

p = PageOne(url='http://www.propublica.org/')

for article in p.articles(pattern=pattern):
    print(article)
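
To sanity-check a pattern before handing it to pageone, you can run it against a couple of urls directly. This is plain re usage, unrelated to pageone itself, and the sample urls are only illustrative:

import re

# hypothetical check of the pattern against sample urls before using it with pageone
pattern = re.compile(r'.*propublica.org/[a-z]+/[a-z0-9/-]+')

print(bool(pattern.match('http://www.propublica.org/article/some-article-slug')))  # True
print(bool(pattern.match('http://www.propublica.org/')))                           # False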

PhantomJS

pageone requires phantomjs to run link_stats. pageone defaults to looking for phantomjs in /usr/bin/local/phantomjs, but if you want to specify another path, initialize PageOne with phantom_path:

from pageone import PageOne

p = PageOne(url='http://www.propublica.org/', phantom_path="/usr/bin/phantomjs")
for link in p.link_stats():
    print(link)
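
If you're not sure where phantomjs lives on your machine, you can look it up on your PATH first. shutil.which is standard library (Python 3.3+); wiring its result into phantom_path is just a sketch:

import shutil
from pageone import PageOne

# locate phantomjs on the PATH and hand that path to PageOne (sketch)
phantom = shutil.which('phantomjs')
if phantom is None:
    raise RuntimeError('phantomjs not found on PATH')

p = PageOne(url='http://www.propublica.org/', phantom_path=phantom)
for link in p.link_stats():
    print(link)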

