
a module for polling urls and stats from homepages

Project description

pageone

a module for polling urls and stats from homepages

Install

pip install pageone

Test

Requires nose

nosetests

Usage

pageone does two things: it extracts article urls from a site's homepage, and it uses selenium and phantomjs to find the relative positions of those urls on the page.

To get stats about the positions of links, use link_stats:

from pageone import PageOne

p = PageOne(url='http://www.propublica.org/')

# get stats about links positions
for link in p.link_stats():
    print link

This will return a list of dictionaries that look like this:

{
 'bucket': 4,
 'datetime': datetime.datetime(2014, 6, 7, 16, 6, 3, 533818),
 'font_size': 13,
 'has_img': 1,
 'headline': u'',
 'homepage': 'http://www.propublica.org/',
 'img_area': 3969,
 'img_height': 63,
 'img_src': u'http://www.propublica.org/images/ngen/gypsy_image_medium/mpmh_victory_drive_140x140_130514_1.jpg',
 'img_width': 63,
 'url': u'http://www.propublica.org/article/protect-service-members-defense-department-plans-broad-ban-high-cost-loans',
 'x': 61,
 'x_bucket': 1,
 'y': 730,
 'y_bucket': 4
}

Here the bucket variables describe where a link falls in a 200x200 pixel grid. x_bucket counts from left to right, y_bucket counts from top to bottom, and bucket runs from the top-left of the page to the bottom-right. You can customize the size of this grid by passing bucket_pixels to link_stats, e.g.:

from pageone import PageOne

p = PageOne(url='http://www.propublica.org/')

# get stats about links positions
for link in p.link_stats(bucket_pixels=100):
    print link
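
For reference, the x_bucket and y_bucket values in the sample output above follow from integer division of the link's pixel position by the grid size. The helper below is an illustrative sketch, not pageone's internal code; it assumes 1-indexed buckets and does not attempt the combined bucket index:

# illustrative helper (not part of pageone): map a link's pixel
# position to the 1-indexed x/y grid buckets described above
def pixel_buckets(x, y, bucket_pixels=200):
    x_bucket = x // bucket_pixels + 1  # counts left-to-right
    y_bucket = y // bucket_pixels + 1  # counts top-to-bottom
    return x_bucket, y_bucket

# the sample link above sits at x=61, y=730 on the default 200px grid
print(pixel_buckets(61, 730))  # (1, 4)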

To simply get all of the article urls on a homepage, use articles:

from pageone import PageOne
p = PageOne(url='http://www.propublica.org/')

for article in p.articles():
  print article

If you also want to include article urls that point to other sites, use incl_external:

from pageone import PageOne
p = PageOne(url='http://www.propublica.org/')

for article in p.articles(incl_external=True):
  print article

How do I know which urls are articles?

pageone uses siegfried for url parsing and validation. If you want to apply a custom regex for article url validation, you can pass a pattern to either link_stats or articles, e.g.:

from pageone import PageOne
import re

pattern = re.compile(r'.*propublica.org/[a-z]+/[a-z0-9/-]+')

p = PageOne(url='http://www.propublica.org/')

for article in p.articles(pattern=pattern):
  print article
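
Since a pattern can be passed to link_stats as well, you can combine a custom regex with position stats. A minimal sketch mirroring the example above, assuming the keyword is also named pattern as in the articles example:

from pageone import PageOne
import re

# same custom article pattern as above, applied to link_stats
pattern = re.compile(r'.*propublica.org/[a-z]+/[a-z0-9/-]+')

p = PageOne(url='http://www.propublica.org/')

for link in p.link_stats(pattern=pattern):
    print(link)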

PhantomJS

pageone requires phantomjs to run link_stats. pageone defaults to looking for phantomjs at /usr/bin/local/phantomjs, but if you want to specify another path, initialize PageOne with phantom_path:

from pageone import PageOne

p = PageOne(url='http://www.propublica.org/', phantom_path="/usr/bin/phantomjs")
for link in p.link_stats():
    print link

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

pageone-0.1.3.tar.gz (5.3 kB)

Uploaded Source

Built Distribution

pageone-0.1.3.macosx-10.9-intel.exe (71.6 kB)

Uploaded Source

File details

Details for the file pageone-0.1.3.tar.gz.

File metadata

  • Download URL: pageone-0.1.3.tar.gz
  • Upload date:
  • Size: 5.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No

File hashes

Hashes for pageone-0.1.3.tar.gz

  • SHA256: ef18cfeb6d7b3544eb99e185c14a56646b3231b9e8d80fbf6f7ed6066717e113
  • MD5: ffd0b9f5eb33138e2f2a61c101cc7d50
  • BLAKE2b-256: 4d095ebbc2a9c8bd62b531429cd9ee2b0adc2f113ef7108c5c5ec4a181e51ddb
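
To check a downloaded copy against the SHA256 digest listed above, a minimal sketch (the filename and location are assumptions; adjust to wherever you saved the file):

import hashlib

# assumed local path to the downloaded source distribution
path = 'pageone-0.1.3.tar.gz'
expected = 'ef18cfeb6d7b3544eb99e185c14a56646b3231b9e8d80fbf6f7ed6066717e113'

with open(path, 'rb') as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print(digest == expected)  # True if the download matches the published hash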


File details

Details for the file pageone-0.1.3.macosx-10.9-intel.exe.

File metadata

File hashes

Hashes for pageone-0.1.3.macosx-10.9-intel.exe

  • SHA256: 368abc6faadf8998523631689baddfed617e4aa33c7c37dbdaec09d962435748
  • MD5: 2b8f9ccad3cd861b73193d1c49e6478c
  • BLAKE2b-256: ca758ed6007b73d54c638a8b737dd7da5696d401d442838c5711a7b3d378702c

