Amazon Photos API


It is recommended to use this API in a Jupyter Notebook, as most endpoints return a DataFrame, which can be neatly displayed in a notebook and efficiently manipulated with vectorized operations. This becomes increasingly important when dealing with large quantities of data.
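For example, once authenticated (see Setup below), results can be filtered and sorted with ordinary pandas operations. A minimal sketch; the flattened column names here are assumptions based on the query fields used later in this README, not guaranteed by the library:

nodes = ap.query("type:(PHOTOS OR VIDEOS)")

# vectorized filtering/sorting; column names assume flattened API fields
images = nodes[nodes['contentProperties.contentType'].str.startswith('image')]
newest = nodes.sort_values('createdDate', ascending=False).head(10)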

Installation

pip install amazon-photos

Setup

There are two ways to set up authentication: pass the cookies explicitly to the AmazonPhotos constructor, or add them as environment variables.

Log in to Amazon Photos and copy the cookies:

  • *ubid-acbxx
  • *at-acbxx
  • session-id

*Replace xx with your country code

Option 1: Cookies Dict

from amazon_photos import AmazonPhotos

ap = AmazonPhotos(
    cookies={
        'ubid-acbca': ...,
        'at-acbca': ...,
        'session-id': ...,
    },
    # optionally cache directory tree 
    cache_path='ap.cache',
    use_cache=True,
    # e.g. pandas options 
    dtype_backend='pyarrow',
    engine='pyarrow',
)

# sanity check, verify authenticated endpoint can be reached
ap.usage()

Option 2: Environment Variables

E.g. for amazon.ca (Canada), you would add to your ~/.bashrc (note that hyphens in the cookie names become underscores):

export session_id="..."
export ubid_acbca="..."
export at_acbca="..."
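
With the variables exported, the client can be constructed without a cookies dict. A minimal sketch, using the tld parameter shown in the Examples section below:

from amazon_photos import AmazonPhotos

# cookies are read from the environment; tld selects the marketplace, e.g. amazon.ca
ap = AmazonPhotos(tld='ca')

# sanity check, verify authenticated endpoint can be reached
ap.usage()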

Query Syntax

For valid location and people IDs, see the results from the aggregations() method.
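For instance, a short sketch of pulling those identifiers and reusing one in a filter (the location value is the example used throughout this README):

# list the identifiers Amazon has calculated for your library
# (location shown; people IDs come from the same method)
locations = ap.aggregations(category='location')

# use an identifier verbatim in a query filter
ap.query("type:(PHOTOS OR VIDEOS) AND location:(CAN#BC#Vancouver)")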

Example query:

/drive/v1/search

type:(PHOTOS OR VIDEOS)
AND things:(plant AND beach OR moon)
AND timeYear:(2019)
AND timeMonth:(7)
AND timeDay:(1)
AND location:(CAN#BC#Vancouver)
AND people:(CyChdySYdfj7DHsjdSHdy)

/drive/v1/nodes

kind:(FILE* OR FOLDER*)
AND contentProperties.contentType:(image* OR video*)
AND status:(AVAILABLE*)
AND settings.hidden:false
AND favorite:(true)

Examples

A database named ap.parquet will be created during the initial setup. It is mainly used to reduce 409 errors (upload conflicts) by checking your local files' md5 hashes against the database before sending an upload request.
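A hypothetical illustration of that dedup check; the md5 column and helper below are assumptions for illustration, not the library's actual internals:

import hashlib
import pandas as pd

db = pd.read_parquet('ap.parquet')  # local node database created at setup

def already_uploaded(path: str) -> bool:
    # hash the local file and look for a matching entry in the database
    digest = hashlib.md5(open(path, 'rb').read()).hexdigest()
    return digest in set(db['md5'])  # assumed column name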

from amazon_photos import AmazonPhotos

## e.g. using cookies dict
ap = AmazonPhotos(
    cookies={
        'ubid-acbca': ...,
        'at-acbca': ...,
        'session-id': ...,
    },
    # optionally cache directory tree 
    cache_path='ap.cache',
    use_cache=True,
)

## e.g. using env variables and specifying tld. E.g. amazon.ca (Canada)
# ap = AmazonPhotos(tld="ca")

# get current usage stats
ap.usage()

# get the entire Amazon Photos library (saved to `ap.parquet` by default)
nodes = ap.query("type:(PHOTOS OR VIDEOS)")

# query the Amazon Photos library with more filters applied (saved to `ap.parquet` by default)
nodes = ap.query("type:(PHOTOS OR VIDEOS) AND things:(plant AND beach OR moon) AND timeYear:(2023) AND timeMonth:(8) AND timeDay:(14) AND location:(CAN#BC#Vancouver)")

# sample first 10 nodes
node_ids = nodes.id[:10]

# move a batch of images/videos to the trash bin
ap.trash(node_ids)

# get trash bin contents
ap.trashed()

# permanently delete a batch of images/videos.
ap.delete(node_ids)

# restore a batch of images/videos from the trash bin
ap.restore(node_ids)

# upload media (preserves local directory structure and copies to Amazon Photos root directory)
ap.upload('path/to/files')

# download a batch of images/videos
ap.download(node_ids)

# convenience method to get photos only
ap.photos()

# convenience method to get videos only
ap.videos()

# get all identifiers calculated by Amazon.
ap.aggregations(category="all")

# get specific identifiers calculated by Amazon.
ap.aggregations(category="location")

Common Parameters

name             type  description
ContentType      str   "JSON"
_                int   e.g. 1690059771064 (epoch timestamp in ms)
asset            str   "ALL" | "MOBILE" | "NONE" | "DESKTOP" (default: "ALL")
filters          str   e.g. "type:(PHOTOS OR VIDEOS) AND things:(plant AND beach OR moon) AND timeYear:(2019) AND timeMonth:(7) AND location:(CAN#BC#Vancouver) AND people:(CyChdySYdfj7DHsjdSHdy)" (default: "type:(PHOTOS OR VIDEOS)")
groupByForTime   str   "day" | "month" | "year"
limit            int   200
lowResThumbnail  str   "true" | "false" (default: "true")
resourceVersion  str   "V2"
searchContext    str   "customer" | "all" | "unknown" | "family" | "groups" (default: "customer")
sort             str   "['contentProperties.contentDate DESC']" | "['contentProperties.contentDate ASC']" | "['createdDate DESC']" | "['createdDate ASC']" | "['name DESC']" | "['name ASC']" (default: "['contentProperties.contentDate DESC']")
tempLink         str   "false" | "true" (default: "false")

Notes

https://www.amazon.ca/drive/v1/batchLink

  • This endpoint is called when downloading a batch of photos/videos in the web interface. It returns a URL to a zip file, and a second request to that URL downloads the content. For 1200 nodes (the max batch size), this turns out to be much slower (~2.5 minutes) than asynchronously downloading the 1200 photos/videos individually (~1 minute); see the sketch below.
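A rough way to check the comparison yourself; a sketch assuming nodes holds the results of an earlier query:

import time

node_ids = nodes.id[:1200]  # 1200 is the max batch size for /drive/v1/batchLink

start = time.time()
ap.download(node_ids)  # per the note above, individual async downloads are the faster path
print(f'downloaded {len(node_ids)} files in {time.time() - start:.1f}s')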
