
bezalel

A library for ingesting data provided by paginated HTTP APIs

Usage

Basic use case

If you have to pull data from an HTTP API whose endpoint accepts a page-number parameter:

pageNumber=1,2,...

and returns JSON like:

{
    "pageCount": 5,
    "entities": [
      {"key":  "val1", ...},
      {"key":  "val2", ...},
      ...
    ]
}

Then you can iterate over all pages with the following code:

import requests
from bezalel import PaginatedApiIterator


for page in PaginatedApiIterator(requests.Session(), url="http://localhost:5000/page-api",
                                 request_page_number_param_name="pageNumber",
                                 response_page_count_field_name="pageCount",
                                 response_records_field_name="entities"):
    print(f"Page: {page}")

It will print:

Page: [{"key":  "val1", ...}, {"key":  "val2", ...}, ...]
Page: [{"key":  "val100", ...}, {"key":  "val101", ...}, ...]
Page: [{"key":  "val200", ...}, {"key":  "val201", ...}, ...]
...
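Conceptually, the iterator just repeats a paged GET request until the reported page count is reached. A minimal sketch of that logic, with a stubbed `fetch_page` standing in for the real HTTP call (the names here are illustrative, not the library's internals):

```python
def iterate_pages(fetch_page):
    """Yield one list of records per page, until the reported pageCount is reached."""
    page_number = 1
    while True:
        body = fetch_page(page_number)  # stands in for GET ...?pageNumber=<n> + .json()
        yield body["entities"]
        if page_number >= body["pageCount"]:
            break
        page_number += 1

def fake_fetch(page_number):
    # Stub response shaped like the API above: 3 pages, 2 records each
    return {"pageCount": 3,
            "entities": [{"key": f"val{page_number * 100 + i}"} for i in range(2)]}

pages = list(iterate_pages(fake_fetch))
# → [[{'key': 'val100'}, {'key': 'val101'}],
#    [{'key': 'val200'}, {'key': 'val201'}],
#    [{'key': 'val300'}, {'key': 'val301'}]]
```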

Grouping with BufferingIterator

If the HTTP API doesn't allow you to set a high number of records per page, use BufferingIterator.

import requests
from bezalel import PaginatedApiIterator, BufferingIterator


for page in BufferingIterator(PaginatedApiIterator(requests.Session(), url="http://localhost:5000/page-api",
                                                   request_page_number_param_name="pageNumber",
                                                   response_page_count_field_name="pageCount",
                                                   response_records_field_name="entities"), buffer_size=2):
    print(f"Page: {page}")

It will combine multiple pages into one array:

Page: [{"key":  "val1", ...}, {"key":  "val2", ...}, ..., {"key":  "val100", ...}, {"key":  "val101", ...}, ...]
Page: [{"key":  "val200", ...}, {"key":  "val201", ...}, ..., {"key":  "val300", ...}, {"key":  "val301", ...}, ...]
...

This is useful when fetching many records and storing them in fewer, larger files.
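The grouping itself amounts to chunking an iterator and flattening each chunk. A stdlib-only sketch of the idea (illustrative, not the library's implementation):

```python
import itertools

def buffered(iterator, buffer_size):
    """Concatenate every `buffer_size` consecutive pages into one list."""
    while True:
        chunk = list(itertools.islice(iterator, buffer_size))
        if not chunk:
            return
        # Flatten the pages in this chunk into a single list of records
        yield [record for page in chunk for record in page]

pages = iter([[1, 2], [3, 4], [5]])
combined = list(buffered(pages, buffer_size=2))
# → [[1, 2, 3, 4], [5]]
```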

Iterating over all records

TODO: this API will be improved in a future release.

import itertools
import requests
from bezalel import PaginatedApiIterator


all_elems = list(itertools.chain.from_iterable(PaginatedApiIterator(requests.Session(), url="https://your/api",
                                                                    request_page_number_param_name="pageNumber",
                                                                    response_page_count_field_name="pageCount",
                                                                    response_records_field_name="entities")))
print(f"len={len(all_elems)}: {all_elems}")

It will print:

len=12300: [{"key":  "val1", ...}, {"key":  "val2", ...}, ...]
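itertools.chain.from_iterable flattens the page lists lazily, so if you only need to stream over records rather than hold them all in one list, you can iterate directly. A small stdlib-only illustration with plain lists standing in for the iterator:

```python
import itertools

# Stand-in for the pages produced by PaginatedApiIterator
pages = [[{"key": "val1"}, {"key": "val2"}], [{"key": "val3"}]]

# Consumes one record at a time; no combined list is ever built
keys = [record["key"] for record in itertools.chain.from_iterable(pages)]
# → ['val1', 'val2', 'val3']
```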

Helper function: normalize_with_prototype()

Normalize a Python dict so that it has all the fields, and only the fields, specified in a prototype dict.

from bezalel import normalize_with_prototype

object_from_api = {
    "id": 123,
    "name": "John",
    "country": "Poland",
    "customDict": {
        "some": 123,
        "complex": 345,
        "structure": 546
    },
    # city is not provided here (but present in prototype)
    "pets": [
        {"id": 101, "type": "dog", "name": "Barky"},
        {"id": 102, "type": "snail"},   # name is not provided here (but present in prototype)
    ],
    "unspecifiedField": 123     # this field is not present in prototype below
}

prototype_from_swagger = {
    "id": 0,
    "name": "",
    "country": "",
    "customDict": {},
    "city": "",
    "pets": [
        {"id": 0, "type": "", "name": ""},
    ]
}

result = normalize_with_prototype(prototype_from_swagger, object_from_api, pass_through_paths=[".customDict"])
# pass_through_paths is optional; it marks objects that should be copied through without normalization

would return

result = {
    "id": 123,
    "name": "John",
    "country": "Poland",
    "customDict": {
        "some": 123,
        "complex": 345,
        "structure": 546
    },
    "city": None,   # city was added
    "pets": [
        {"id": 101, "type": "dog", "name": "Barky"},
        {"id": 102, "type": "snail", "name": None}, # name was added
    ]
}
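The behaviour can be pictured as a recursive walk over the prototype: keys missing from the input become None, keys absent from the prototype are dropped, and nested dicts and lists recurse. A simplified stdlib-only sketch of that idea (illustrative, not the library's actual implementation; it omits pass_through_paths support):

```python
def normalize(prototype, obj):
    if isinstance(prototype, dict):
        obj = obj if isinstance(obj, dict) else {}
        # Keep exactly the prototype's keys; missing values become None
        return {k: normalize(v, obj.get(k)) for k, v in prototype.items()}
    if isinstance(prototype, list):
        item_prototype = prototype[0]
        return [normalize(item_prototype, item) for item in (obj or [])]
    return obj  # leaf: take the input value as-is (None if it was missing)

proto = {"id": 0, "name": "", "pets": [{"id": 0, "name": ""}]}
data = {"id": 1, "pets": [{"id": 101}], "extra": "dropped"}
result = normalize(proto, data)
# → {'id': 1, 'name': None, 'pets': [{'id': 101, 'name': None}]}
```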

Helper function: normalize_dicts()

Normalize a list of nested Python dicts into a list of flat, one-level dicts.

Example:

from bezalel import normalize_dicts

data = [
    {
        "id": 1, "name": "John Smith",
        "pets": [
            {"id": 101, "type": "cat", "name": "Kitty", "toys": [{"name": "toy1"}, {"name": "toy2"}]},
            {"id": 102, "type": "dog", "name": "Barky", "toys": [{"name": "toy3"}]}
        ]
    },
    {
        "id": 2, "name": "Sue Smith",
        "pets": [
            {"id": 201, "type": "cat", "name": "Kitten", "toys": [{"name": "toy4"}, {"name": "toy5"}, {"name": "toy6"}]},
            {"id": 202, "type": "dog", "name": "Fury", "toys": []}
        ]
    },
]

normalize_dicts(data, ["pets", "toys"])

would return:

[{'id': 1, 'name': 'John Smith', 'pets.id': 101, 'pets.type': 'cat', 'pets.name': 'Kitty', 'pets.toys.name': 'toy1'},
 {'id': 1, 'name': 'John Smith', 'pets.id': 101, 'pets.type': 'cat', 'pets.name': 'Kitty', 'pets.toys.name': 'toy2'},
 {'id': 1, 'name': 'John Smith', 'pets.id': 102, 'pets.type': 'dog', 'pets.name': 'Barky', 'pets.toys.name': 'toy3'},
 {'id': 2, 'name': 'Sue Smith', 'pets.id': 201, 'pets.type': 'cat', 'pets.name': 'Kitten', 'pets.toys.name': 'toy4'},
 {'id': 2, 'name': 'Sue Smith', 'pets.id': 201, 'pets.type': 'cat', 'pets.name': 'Kitten', 'pets.toys.name': 'toy5'},
 {'id': 2, 'name': 'Sue Smith', 'pets.id': 201, 'pets.type': 'cat', 'pets.name': 'Kitten', 'pets.toys.name': 'toy6'},
 {'id': 2, 'name': 'Sue Smith', 'pets.id': 202, 'pets.type': 'dog', 'pets.name': 'Fury'}]

Presence of the last record can be controlled with the return_incomplete_records flag. With return_incomplete_records=False, the last record in the example above (which has no pets.toys.name, because Fury has no toys) would not be returned.

Additional options:

  • jsonify_lists - when set to True, any list encountered outside the main path is dumped as a JSON string.
  • jsonify_dicts - a list of paths at which to expect a dict; each such dict is dumped as a JSON string.
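Conceptually, the flattening is a cartesian walk down the listed paths: every innermost item produces one row that carries its parents' scalar fields under dot-prefixed keys. A simplified stdlib-only sketch of that idea (illustrative, not the library's implementation; it handles only the path expansion, behaves like return_incomplete_records=True, and ignores the jsonify options):

```python
def flatten(records, paths):
    """Yield one flat row per innermost item along `paths`."""
    path = paths[0] if paths else None
    for record in records:
        # Scalar fields stay on the row; the nested list (if any) is expanded below
        base = {k: v for k, v in record.items() if k != path}
        children = record.get(path, []) if path else []
        if not children:
            yield base  # incomplete record: no children at this level
            continue
        for child_row in flatten(children, paths[1:]):
            row = dict(base)
            row.update({f"{path}.{k}": v for k, v in child_row.items()})
            yield row

data = [{"id": 1, "name": "John",
         "pets": [{"id": 101, "toys": [{"name": "toy1"}, {"name": "toy2"}]},
                  {"id": 102, "toys": []}]}]
rows = list(flatten(data, ["pets", "toys"]))
# → [{'id': 1, 'name': 'John', 'pets.id': 101, 'pets.toys.name': 'toy1'},
#    {'id': 1, 'name': 'John', 'pets.id': 101, 'pets.toys.name': 'toy2'},
#    {'id': 1, 'name': 'John', 'pets.id': 102}]
```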
