Amazon Photos API
Installation
pip install amazon-photos
It is recommended to use this API in a Jupyter Notebook, as most endpoints return a DataFrame, which can be displayed neatly in a notebook and manipulated efficiently with vectorized operations. This becomes increasingly important when dealing with large quantities of data.
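For example, once query results have been saved (by default to ap.parquet, see Setup and Examples below), they can be reloaded and filtered with standard pandas operations. This is only a sketch; the "name" column used here is an assumption for illustration.

import pandas as pd

# load previously saved results (the default db_path used below)
nodes = pd.read_parquet("ap.parquet")

# vectorized filtering: keep jpg files ("name" is an assumed column name)
jpgs = nodes[nodes["name"].str.lower().str.endswith(".jpg", na=False)]
print(len(jpgs), "jpg files")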
Setup
There are two ways to set up authentication: pass the cookies explicitly to the AmazonPhotos constructor, or set your cookies as environment variables.
Log in to Amazon Photos and copy the cookies:
- ubid-acbxx*
- at-acbxx*
- session-id

*Replace xx with your country code
Option 1: Cookies Dict
from amazon_photos import AmazonPhotos

ap = AmazonPhotos(
    cookies={
        "session-id": ...,
        "ubid-acbca": ...,
        "at-acbca": ...,
    },
    db_path="ap.parquet",  # initialize a simple database to store results
)

# sanity check, verify the authenticated endpoint can be reached
ap.usage()
Option 2: Environment Variables
E.g. for amazon.ca (Canada), you would add to your ~/.bashrc:
export session_id="..."
export ubid_acbca="..."
export at_acbca="..."
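With these variables set, the client can be constructed without passing cookies explicitly by specifying the tld, as shown in the Examples section below:

from amazon_photos import AmazonPhotos

# cookies are picked up from the environment; tld selects the marketplace (e.g. "ca" for amazon.ca)
ap = AmazonPhotos(tld="ca")
ap.usage()  # sanity check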
Query Syntax
For valid location and people IDs, see the results from the aggregations() method.
Example query:
/drive/v1/search
type:(PHOTOS OR VIDEOS)
AND things:(plant AND beach OR moon)
AND timeYear:(2019)
AND timeMonth:(7)
AND timeDay:(1)
AND location:(CAN#BC#Vancouver)
AND people:(CyChdySYdfj7DHsjdSHdy)
/drive/v1/nodes
kind:(FILE* OR FOLDER*)
AND contentProperties.contentType:(image* OR video*)
AND status:(AVAILABLE*)
AND settings.hidden:false
AND favorite:(true)
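These filter expressions are what gets passed to the query() method (see Examples below), e.g.:

# search the library using the filter syntax above
nodes = ap.query("type:(PHOTOS OR VIDEOS) AND timeYear:(2019) AND location:(CAN#BC#Vancouver)")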
Examples
from amazon_photos import AmazonPhotos
## e.g. using cookies dict
ap = AmazonPhotos(cookies={
    "at-acbca": ...,
    "ubid-acbca": ...,
    "session-id": ...,
})
## e.g. using env variables and specifying the tld, e.g. "ca" for amazon.ca (Canada)
# ap = AmazonPhotos(tld="ca")
# get current usage stats
ap.usage()
# get entire Amazon Photos library. (default save to `ap.parquet`)
nodes = ap.query("type:(PHOTOS OR VIDEOS)")
# query Amazon Photos library with more filters applied. (default save to `ap.parquet`)
nodes = ap.query("type:(PHOTOS OR VIDEOS) AND things:(plant AND beach OR moon) AND timeYear:(2023) AND timeMonth:(8) AND timeDay:(14) AND location:(CAN#BC#Vancouver)")
# sample first 10 nodes
node_ids = nodes.id[:10]
# move a batch of images/videos to the trash bin
ap.trash(node_ids)
# get trash bin contents
ap.trashed()
# permanently delete a batch of images/videos.
ap.delete(node_ids)
# restore a batch of images/videos from the trash bin
ap.restore(node_ids)
# upload media (preserves local directory structure and copies to Amazon Photos root directory)
ap.upload('path/to/files')
# download a batch of images/videos
ap.download(node_ids)
# convenience method to get photos only
ap.photos()
# convenience method to get videos only
ap.videos()
# get all identifiers calculated by Amazon.
ap.aggregations(category="all")
# get specific identifiers calculated by Amazon.
ap.aggregations(category="location")
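Putting a few of these calls together, a typical workflow might look like:

# look up valid location identifiers first
ap.aggregations(category="location")

# query with those identifiers, then download the matching media
nodes = ap.query("type:(PHOTOS OR VIDEOS) AND timeYear:(2023) AND location:(CAN#BC#Vancouver)")
ap.download(nodes.id)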
Common Parameters
name | type | description |
---|---|---|
ContentType | str | "JSON" |
_ | int | 1690059771064 |
asset | str | "ALL" "MOBILE" "NONE" "DESKTOP" default: "ALL" |
filters | str | "type:(PHOTOS OR VIDEOS) AND things:(plant AND beach OR moon) AND timeYear:(2019) AND timeMonth:(7) AND location:(CAN#BC#Vancouver) AND people:(CyChdySYdfj7DHsjdSHdy)" default: "type:(PHOTOS OR VIDEOS)" |
groupByForTime | str | "day" "month" "year" |
limit | int | 200 |
lowResThumbnail | str | "true" "false" default: "true" |
resourceVersion | str | "V2" |
searchContext | str | "customer" "all" "unknown" "family" "groups" default: "customer" |
sort | str | "['contentProperties.contentDate DESC']" "['contentProperties.contentDate ASC']" "['createdDate DESC']" "['createdDate ASC']" "['name DESC']" "['name ASC']" default: "['contentProperties.contentDate DESC']" |
tempLink | str | "false" "true" default: "false" |
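These are query-string parameters for the underlying drive/v1/search endpoint. The sketch below shows how they might map onto a direct request; the host, path, and cookie handling are assumptions for illustration, and in practice the library's query()/photos()/videos() methods build these requests for you.

import requests

# cookies copied from the browser, as in Setup above
cookies = {
    "session-id": "...",
    "ubid-acbca": "...",
    "at-acbca": "...",
}

# parameters from the table above
params = {
    "ContentType": "JSON",
    "asset": "ALL",
    "filters": "type:(PHOTOS OR VIDEOS) AND timeYear:(2019)",
    "groupByForTime": "year",
    "limit": 200,
    "lowResThumbnail": "true",
    "resourceVersion": "V2",
    "searchContext": "customer",
    "sort": "['contentProperties.contentDate DESC']",
    "tempLink": "false",
}

# hypothetical direct call against the amazon.ca endpoint
r = requests.get("https://www.amazon.ca/drive/v1/search", params=params, cookies=cookies)
r.raise_for_status()
print(r.json())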
Notes
https://www.amazon.ca/drive/v1/batchLink
- This endpoint is called when downloading a batch of photos/videos in the web interface. It returns a URL to a zip file, and a second request to that URL downloads the content. Requesting data for 1200 nodes (the max batch size) this way turns out to be much slower (~2.5 minutes) than asynchronously downloading 1200 photos/videos individually (~1 minute).
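In other words, for large batches it is typically faster to skip batchLink and download nodes individually through the library. A minimal sketch, with the chunk size chosen to mirror the 1200-node limit above:

# download in chunks rather than requesting a single zip via batchLink
ids = nodes.id
for i in range(0, len(ids), 1200):
    ap.download(ids[i:i + 1200])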