
Python library for retrieving comments and stories from HackerNews



Scrape Hacker News comments and posts using the Algolia API.


    from hackernews_scraper import CommentScraper


The above will return a generator that yields one comment at a time.
It will keep going until there are no more comments to fetch, or until
it reaches the 50-page limit set by Hacker News. In the latter case, a
`TooManyItemsException` is raised.

If the Hacker News API response is missing any required fields, the scraper
will raise a `KeyError`.
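Because a long scrape can end either by exhausting the comments or by hitting the 50-page limit, it can be convenient to drain the generator and treat the limit exception as a normal stop. Below is a minimal sketch; the commented-out import path and scraper call are assumptions for illustration, not the library's documented API.

```python
def drain(generator, stop_exception):
    """Collect items from `generator` into a list.

    Stops cleanly if `stop_exception` is raised mid-iteration,
    e.g. when the scraper hits Hacker News's 50-page limit.
    """
    items = []
    try:
        for item in generator:
            items.append(item)
    except stop_exception:
        pass  # limit reached; keep whatever was fetched so far
    return items

# With the real scraper this might look like (names assumed):
# from hackernews_scraper import CommentScraper, TooManyItemsException
# comments = drain(CommentScraper().comments(), TooManyItemsException)
```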

Response format

A comment:

    {'author': u'dhmholley',
     'comment_id': u'7531026',
     'comment_text': u'Are people still blowing this whistle?...',
     'created_at': u'2014-04-04T12:57:38.000Z',
     'parent_id': 7530853,
     'points': 1,
     'story_id': None,
     'story_title': None,
     'story_url': None,
     'timestamp': 1396616258,
     'title': None,
     'url': None}

A story:

    {'author': u'sethco',
     'created_at': u'2014-04-04T12:56:23.000Z',
     'objectID': None,
     'points': 1,
     'story_text': 1,
     'timestamp': 1396616183,
     'title': u'Opower IPO today',
     'url': u''}
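Comments and stories share one schema: in the examples above, story-only fields such as `title` and `url` are `None` on a comment, while comment-only fields such as `comment_text` are absent or `None` on a story. Assuming that holds in general, a quick way to tell the two apart is:

```python
def is_story(item):
    """True for a story, False for a comment.

    Relies on the response format shown above: only stories
    carry a `title`; on comments it is None.
    """
    return item.get('title') is not None

# Trimmed versions of the example responses above:
comment = {'author': u'dhmholley', 'comment_id': u'7531026',
           'comment_text': u'Are people still blowing this whistle?...',
           'title': None, 'url': None}
story = {'author': u'sethco', 'title': u'Opower IPO today', 'url': u''}
```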


You need to have `httpretty` and `factory-boy` installed.

Run `nosetests` in the root folder or the `tests` folder.


Files for hackernews_scraper, version 1.0.2: hackernews_scraper-1.0.2.tar.gz (5.4 kB, source distribution).
