
Provides content not accessible through the standard Amazon API

Project description

A hybrid web scraper / API client. It supplements the standard Amazon API with web-scraping functionality to get extra data, specifically product reviews.

Uses the Amazon Simple Product API to provide API-accessible data. API search functions are imported directly into the amazon_scraper module.

Parameters are kept in the same style as the underlying API, which in turn uses Bottlenose-style parameters; hence the non-Pythonic parameter names (e.g. ItemId).

The AmazonScraper constructor will pass kwargs to Bottlenose (via Amazon Simple Product API). Bottlenose supports AWS regions, queries-per-second limiting, query caching, and other nice features. Please see Bottlenose's API documentation for more information.

The latest version of python-amazon-simple-product-api (1.5.0 at time of writing) doesn't support these arguments, only Region. If you require them, please use the latest code from their repository with the following command:

pip install git+


Amazon continually tries to keep scrapers from working. It does this by:

  • A/B testing (randomly receive different HTML).
  • Huge numbers of HTML layouts for the same product categories.
  • Changing HTML layouts.
  • Moving content inside iFrames.

Amazon has resorted to moving more and more content into iFrames, which this scraper can't handle. I envisage a time when most data will be inaccessible without more complex logic.

I've spent a long time trying to get these scrapers working and it's a never-ending battle. I don't have the time to keep pace with Amazon. If you are interested in improving Amazon Scraper, please let me know (creating an issue is fine). Any help is appreciated.


Installation

pip install amazon_scraper


All Products All The Time

Create an API instance:

>>> from amazon_scraper import AmazonScraper
>>> amzn = AmazonScraper("put your access key", "secret key", "and associate tag here")

The constructor accepts kwargs which are passed to the bottlenose.Amazon constructor:

>>> from amazon_scraper import AmazonScraper
>>> amzn = AmazonScraper("put your access key", "secret key", "and associate tag here", Region='UK', MaxQPS=0.9, Timeout=5.0)
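MaxQPS=0.9 caps the client at just under one request per second. A minimal sketch of the idea behind such a limiter (this is an illustration only, not Bottlenose's actual implementation; the class name and the injectable clock/sleep hooks are my own):

```python
import time

class QPSLimiter(object):
    """Sleep between calls so at most max_qps calls happen per second."""

    def __init__(self, max_qps, clock=time.monotonic, sleep=time.sleep):
        self.min_interval = 1.0 / max_qps  # minimum seconds between calls
        self.clock = clock
        self.sleep = sleep
        self.last_call = None

    def wait(self):
        """Block until enough time has passed since the previous call."""
        now = self.clock()
        if self.last_call is not None:
            elapsed = now - self.last_call
            if elapsed < self.min_interval:
                self.sleep(self.min_interval - elapsed)
        self.last_call = self.clock()

# calling limiter.wait() before each request keeps you under ~0.9 req/s
limiter = QPSLimiter(max_qps=0.9)
```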


Search:

>>> import itertools
>>> for p in itertools.islice('python', SearchIndex='Books'), 5):
>>>     print p.title
Learning Python, 5th Edition
Python Programming: An Introduction to Computer Science 2nd Edition
Python In A Day: Learn The Basics, Learn It Quick, Start Coding Fast (In A Day Books) (Volume 1)
Python for Data Analysis: Data Wrangling with Pandas, NumPy, and IPython
Python Cookbook

Lookup by ASIN/ItemId:

>>> p = amzn.lookup(ItemId='B00FLIJJSA')
>>> p.title
Kindle, Wi-Fi, 6" E Ink Display - for international shipment
>>> p.url

Batch Lookups:

>>> for p in amzn.lookup(ItemId='B0051QVF7A,B007HCCNJU,B00BTI6HBS'):
>>>     print p.title
Kindle, Wi-Fi, 6" E Ink Display - for international shipment
Kindle, 6" E Ink Display, Wi-Fi - Includes Special Offers (Black)
Kindle Paperwhite 3G, 6" High Resolution Display with Next-Gen Built-in Light, Free 3G + Wi-Fi - Includes Special Offers
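The underlying ItemLookup operation accepts at most ten ItemIds per call, so longer ASIN lists need to be split into batches. A small helper for that (the function name is my own):

```python
def batch_item_ids(asins, batch_size=10):
    """Yield comma-separated ItemId strings, at most batch_size ASINs each."""
    for i in range(0, len(asins), batch_size):
        yield ','.join(asins[i:i + batch_size])

# usage with the scraper would look like:
# for item_ids in batch_item_ids(all_asins):
#     for p in amzn.lookup(ItemId=item_ids):
#         print p.title
```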


By URL:

>>> p = amzn.lookup(URL='')
>>> p.title
Kindle, Wi-Fi, 6" E Ink Display - for international shipment
>>> p.asin

Product Ratings:

>>> p = amzn.lookup(ItemId='B00FLIJJSA')
>>> p.ratings
[8, 4, 6, 4, 13]
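ratings is a histogram of review counts per star level. Assuming the list runs from one-star to five-star counts (check against the product page if the ordering matters to you), the average star rating can be derived like this (the helper name is mine):

```python
def average_rating(ratings):
    """Weighted mean of a 1..5 star histogram; ratings[0] is the 1-star count."""
    total = sum(ratings)
    if total == 0:
        return None  # no reviews yet
    weighted = sum(stars * count for stars, count in enumerate(ratings, start=1))
    return weighted / float(total)

# for the histogram above, [8, 4, 6, 4, 13]:
# 35 reviews averaging roughly 3.3 stars
```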

Alternative Bindings:

>>> p = amzn.lookup(ItemId='B000GRFTPS')
>>> p.alternatives
['B00IVM5X7E', '9163192993', '0899669433', 'B00IPXPQ9O', '1482998742', '0441444814', '1497344824']
>>> for asin in p.alternatives:
>>>     alt = amzn.lookup(ItemId=asin)
>>>     print alt.title, alt.binding
The King in Yellow Kindle Edition
The King in Yellow Unknown Binding
King in Yellow Hardcover
The Yellow Sign Audible Audio Edition
The King in Yellow MP3 CD
THE KING IN YELLOW Mass Market Paperback
The King in Yellow Paperback

Supplemental text not available via the API:

>>> p = amzn.lookup(ItemId='0441016685')
>>> p.supplemental_text
[u"Bob Howard is a computer-hacker desk jockey ... ", u"Lovecraft\'s Cthulhu meets Len Deighton\'s spies ... ", u"This dark, funny blend of SF and ... "]

Review API

View lists of reviews:

>>> p = amzn.lookup(ItemId='B0051QVF7A')
>>> rs =
>>> rs.asin
>>> rs.ids
>>> rs.url

Quickly get a list of all reviews on a review page using the all_reviews property:

>>> p = amzn.lookup(ItemId='B0051QVF7A')
>>> rs =
>>> all_reviews_on_page = rs.all_reviews
>>> len(all_reviews_on_page)
>>> all_reviews_on_page[0].to_dict()["title"]
'Fantastic device - pick your Kindle!'

By ASIN/ItemId:

>>> rs ='B0051QVF7A')
>>> rs.asin
>>> rs.ids

For individual reviews, use the review method. Note that this method is not recommended for bulk collection of reviews; use all_reviews instead:

>>> r =[0])
>>> r.asin
>>> r.url
2011-09-29 18:27:14+00:00
>>> r.text
Having been a little overwhelmed by the choices between all the new Kindles ... <snip>


By URL:

>>> r ='')

Reviewer API

This package also supports getting information about specific reviewers and the reviews they have written over time. For now it is advisable to first look up a reviewer via one of the products they have reviewed; this will be improved in the future.

Get reviews that a single reviewer has created:

>>> r ="R3MF0NIRI3BT1E")
>>> reviewer = amzn.reviewer(r.author_reviews_url)
>>> all_reviews = reviewer.all_reviews

Iterate to the author's next review page, if they have one:

>>> r ="R3MF0NIRI3BT1E")
>>> reviewer = amzn.reviewer(r.author_reviews_url)
>>> reviewer = amzn.reviewer(reviewer.next_page_url)
>>> second_page_reviews = reviewer.all_reviews
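The two steps above generalize to a loop that follows next_page_url until the pages run out. A sketch of that traversal (I'm assuming next_page_url is falsy on the last page; the function name and the fetch parameter are my own):

```python
def collect_all_reviews(first_page, fetch):
    """Follow next_page_url from page to page, gathering every review.

    fetch maps a page URL to the next page object; with the real
    scraper this would be amzn.reviewer.
    """
    reviews = []
    page = first_page
    while page is not None:
        reviews.extend(page.all_reviews)
        url = page.next_page_url
        page = fetch(url) if url else None
    return reviews

# with the real scraper:
# r ="R3MF0NIRI3BT1E")
# reviewer = amzn.reviewer(r.author_reviews_url)
# everything = collect_all_reviews(reviewer, amzn.reviewer)
```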


