Amazon Photos API


Installation

pip install amazon-photos

It is recommended to use this API in a Jupyter Notebook, as the results from most endpoints are a DataFrame which can be neatly displayed in a notebook, and efficiently manipulated with vectorized operations. This becomes increasingly important when dealing with large quantities of data.
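
For example, once an AmazonPhotos client is set up (see Setup below), the DataFrame returned by query() can be filtered with ordinary vectorized pandas operations. A minimal sketch, assuming the returned nodes carry a name column:

# sketch: vectorized filtering of query results with pandas
# assumes `ap` is an authenticated client (see Setup below)
nodes = ap.query("type:(PHOTOS OR VIDEOS)")
jpgs = nodes[nodes["name"].str.endswith(".jpg", na=False)]
print(f"{len(jpgs)} jpg files")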

Setup

There are two ways to set up authentication: pass the cookies explicitly to the AmazonPhotos constructor, or add them as environment variables.

Log in to Amazon Photos and copy the cookies:

  • *ubid-acbxx
  • *at-acbxx
  • session-id

*Replace xx with your country code

Option 1: Cookies Dict

from amazon_photos import AmazonPhotos

ap = AmazonPhotos(
    cookies={
        "session-id": ...,
        "ubid-acbca": ...,
        "at-acbca": ...,
    },
    db_path="ap.parquet",  # initialize a simple database to store results
)

# sanity check, verify authenticated endpoint can be reached
ap.usage()

Option 2: Environment Variables

For amazon.ca (Canada), for example, you would add the following to your ~/.bashrc:

export session_id="..."
export ubid_acbca="..."
export at_acbca="..."
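
With those variables set, the client can be constructed without passing cookies; the tld argument selects the Amazon domain:

from amazon_photos import AmazonPhotos

# cookies are read from the environment; tld="ca" targets amazon.ca
ap = AmazonPhotos(tld="ca")

# sanity check, verify authenticated endpoint can be reached
ap.usage()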

Query Syntax

For valid location and people IDs, see the results from the aggregations() method.

Example query:

/drive/v1/search

type:(PHOTOS OR VIDEOS)
AND things:(plant AND beach OR moon)
AND timeYear:(2019)
AND timeMonth:(7)
AND timeDay:(1)
AND location:(CAN#BC#Vancouver)
AND people:(CyChdySYdfj7DHsjdSHdy)

/drive/v1/nodes

kind:(FILE* OR FOLDER*)
AND contentProperties.contentType:(image* OR video*)
AND status:(AVAILABLE*)
AND settings.hidden:false
AND favorite:(true)
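
These filter strings are what query() accepts directly. A short sketch combining a few of the clauses above (the location value is the placeholder from the example):

# sketch: pass the search syntax above directly to query()
nodes = ap.query(
    "type:(PHOTOS OR VIDEOS)"
    " AND timeYear:(2019) AND timeMonth:(7)"
    " AND location:(CAN#BC#Vancouver)"
)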

Examples

from amazon_photos import AmazonPhotos

## e.g. using cookies dict
ap = AmazonPhotos(cookies={
    "at-acbca": ...,
    "ubid-acbca": ...,
    "session-id": ...,
})

## e.g. using env variables and specifying tld. E.g. amazon.ca (Canada)
# ap = AmazonPhotos(tld="ca")

# get current usage stats
ap.usage()

# get the entire Amazon Photos library (saved to `ap.parquet` by default)
nodes = ap.query("type:(PHOTOS OR VIDEOS)")

# query the library with more filters applied (saved to `ap.parquet` by default)
nodes = ap.query("type:(PHOTOS OR VIDEOS) AND things:(plant AND beach OR moon) AND timeYear:(2023) AND timeMonth:(8) AND timeDay:(14) AND location:(CAN#BC#Vancouver)")

# sample first 10 nodes
node_ids = nodes.id[:10]

# move a batch of images/videos to the trash bin
ap.trash(node_ids)

# get trash bin contents
ap.trashed()

# restore a batch of images/videos from the trash bin
ap.restore(node_ids)

# permanently delete a batch of images/videos
ap.delete(node_ids)

# upload media (preserves local directory structure and copies to Amazon Photos root directory)
ap.upload('path/to/files')

# download a batch of images/videos
ap.download(node_ids)

# convenience method to get photos only
ap.photos()

# convenience method to get videos only
ap.videos()

# get all identifiers calculated by Amazon.
ap.aggregations(category="all")

# get specific identifiers calculated by Amazon.
ap.aggregations(category="location")

Common Parameters

  • ContentType (str): "JSON"
  • _ (int): e.g. 1690059771064 (a millisecond Unix timestamp)
  • asset (str): "ALL" | "MOBILE" | "NONE" | "DESKTOP"; default: "ALL"
  • filters (str): e.g. "type:(PHOTOS OR VIDEOS) AND things:(plant AND beach OR moon) AND timeYear:(2019) AND timeMonth:(7) AND location:(CAN#BC#Vancouver) AND people:(CyChdySYdfj7DHsjdSHdy)"; default: "type:(PHOTOS OR VIDEOS)"
  • groupByForTime (str): "day" | "month" | "year"
  • limit (int): 200
  • lowResThumbnail (str): "true" | "false"; default: "true"
  • resourceVersion (str): "V2"
  • searchContext (str): "customer" | "all" | "unknown" | "family" | "groups"; default: "customer"
  • sort (str): "['contentProperties.contentDate DESC']" | "['contentProperties.contentDate ASC']" | "['createdDate DESC']" | "['createdDate ASC']" | "['name DESC']" | "['name ASC']"; default: "['contentProperties.contentDate DESC']"
  • tempLink (str): "false" | "true"; default: "false"

Notes

https://www.amazon.ca/drive/v1/batchLink

  • This endpoint is called when downloading a batch of photos/videos in the web interface. It returns a URL to a zip file, and a second request to that URL downloads the content. For 1200 nodes (the max batch size), this turns out to be much slower (~2.5 minutes) than asynchronously downloading the 1200 photos/videos individually (~1 minute).
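
In practice this means per-node downloads are preferable for large batches; a sketch reusing download() from the examples above:

# sketch: per the timing note above, concurrent per-node downloads
# beat the zip-based batchLink endpoint for large batches
nodes = ap.query("type:(PHOTOS OR VIDEOS)")
ap.download(nodes.id[:1200])  # 1200 being the max batch size noted above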
