paginate-json

CLI tool for retrieving JSON from paginated APIs.

This tool works against APIs that use the HTTP Link header for pagination. The GitHub API is one example of this.
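
You can inspect this header yourself. Here is a quick check using curl and grep (the URLs and page numbers in the output will vary):

curl -si https://api.github.com/users/simonw/events | grep -i '^link:'

A paginated response includes a header that looks something like this:

link: <https://api.github.com/users/simonw/events?page=2>; rel="next", <https://api.github.com/users/simonw/events?page=6>; rel="last"

paginate-json follows the rel="next" URL from each response until no further pages remain.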

Installation

pip install paginate-json

Or use pipx:

pipx install paginate-json

Usage

Run this tool against a URL that returns a JSON list of items and uses the link: HTTP header to indicate the URL of the next page of results.

It will output a single JSON list containing all of the records, across multiple pages.

paginate-json \
  https://api.github.com/users/simonw/events

You can use the --header option to send additional request headers. For example, if you have a GitHub OAuth token you can pass it like this:

paginate-json \
  https://api.github.com/users/simonw/events \
  --header Authorization "bearer e94d9e404d86..."
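
To avoid pasting the token literal into your command, you can reference an environment variable instead (this assumes you have exported GITHUB_TOKEN yourself; the variable name is just an example):

paginate-json \
  https://api.github.com/users/simonw/events \
  --header Authorization "bearer $GITHUB_TOKEN"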

Some APIs return a root-level object where the items you want to gather are stored under a key, like this example from the Datasette JSON API:

{
  "ok": true,
  "rows": [
    {
      "id": 1,
      "name": "San Francisco"
    },
    {
      "id": 2,
      "name": "Los Angeles"
    },
    {
      "id": 3,
      "name": "Detroit"
    },
    {
      "id": 4,
      "name": "Memnonia"
    }
  ]
}

In this case, use --key rows to specify which key to extract the items from:

paginate-json \
  https://latest.datasette.io/fixtures/facet_cities.json \
  --key rows

The output JSON will be streamed as a pretty-printed JSON array by default.
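
For the facet_cities example above, that default output looks something like this (exact whitespace may differ):

[
  {
    "id": 1,
    "name": "San Francisco"
  },
  {
    "id": 2,
    "name": "Los Angeles"
  },
  {
    "id": 3,
    "name": "Detroit"
  },
  {
    "id": 4,
    "name": "Memnonia"
  }
]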

To switch to newline-delimited JSON, with a separate object on each line, add --nl:

paginate-json \
  https://latest.datasette.io/fixtures/facet_cities.json \
  --key rows \
  --nl

The output from that command looks like this:

{"id": 1, "name": "San Francisco"}
{"id": 2, "name": "Los Angeles"}
{"id": 3, "name": "Detroit"}
{"id": 4, "name": "Memnonia"}

Using this with sqlite-utils

This tool works well in conjunction with sqlite-utils. For example, here's how to load all of the GitHub issues for a project into a local SQLite database.

paginate-json \
  "https://api.github.com/repos/simonw/datasette/issues?state=all&filter=all" \
  --nl | \
  sqlite-utils upsert /tmp/issues.db issues - --nl --pk=id
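
Once that finishes, you can sanity-check the import by running a SQL query against the new database (sqlite-utils accepts a SQL string directly after the database path):

sqlite-utils /tmp/issues.db "select count(*) from issues"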

You can then use other features of sqlite-utils to enhance the resulting database. For example, to enable full-text search on the issue title and body columns:

sqlite-utils enable-fts /tmp/issues.db issues title body
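
With the index in place, the sqlite-utils search command can query those columns. A sketch, using an arbitrary search term:

sqlite-utils search /tmp/issues.db issues "pagination"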

Using jq to transform each page

If you install the optional jq or pyjq dependency, you can also pass --jq PROGRAM to transform the results of each page using a jq program. The program you supply should transform each page of fetched results into an array of objects.

For example, to extract the id and title from each issue:

paginate-json \
  "https://api.github.com/repos/simonw/datasette/issues" \
  --nl \
  --jq 'map({id, title})'
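
Each output line will then contain only those two keys. Illustrative output with placeholder values (real IDs and titles come from the live API):

{"id": 1, "title": "Example issue title"}
{"id": 2, "title": "Another example issue"}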

paginate-json --help

Usage: paginate-json [OPTIONS] URL

  Fetch paginated JSON from a URL

  Example usage:

      paginate-json https://api.github.com/repos/simonw/datasette/issues

Options:
  --version                Show the version and exit.
  --nl                     Output newline-delimited JSON
  --key TEXT               Top-level key to extract from each page
  --jq TEXT                jq transformation to run on each page
  --accept TEXT            Accept header to send
  --sleep INTEGER          Seconds to delay between requests
  --silent                 Don't show progress on stderr - default
  -v, --verbose            Show progress on stderr
  --show-headers           Dump response headers out to stderr
  --ignore-http-errors     Keep going on non-200 HTTP status codes
  --header <TEXT TEXT>...  Send custom request headers
  --help                   Show this message and exit.

Download files

Download the file for your platform.

Source Distribution

paginate-json-1.0.tar.gz (9.9 kB)

Built Distribution

paginate_json-1.0-py3-none-any.whl (9.8 kB)

File details

Details for the file paginate-json-1.0.tar.gz.

File metadata

  • Download URL: paginate-json-1.0.tar.gz
  • Size: 9.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.4

File hashes

Hashes for paginate-json-1.0.tar.gz:

  • SHA256: 689d3599b38a325c6f4ae76c773b8c79a6f5f324305ca6a88ed1bf72cebc04f3
  • MD5: 7925e492324ac547ce032a8766e27621
  • BLAKE2b-256: 03afc888fe62794fd285b1ce3d29131d14358dc8da568afdfd8a09bd4c976b18

File details

Details for the file paginate_json-1.0-py3-none-any.whl.

File metadata

  • Download URL: paginate_json-1.0-py3-none-any.whl
  • Size: 9.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.4

File hashes

Hashes for paginate_json-1.0-py3-none-any.whl:

  • SHA256: 7ebcc109bf56865d24fe4fc3b015551466abe4396aa9713faddc92b7290bacbc
  • MD5: c776529075d2e473ad99bffed2560048
  • BLAKE2b-256: e623497a4d248f2409ce32c623f828da0638746453ab5044dc8bbc1bc1a27d29

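These hashes can be used with pip's hash-checking mode to verify a download. A minimal sketch, pinning the source distribution's SHA256 from above in a requirements file:

# requirements.txt
paginate-json==1.0 \
    --hash=sha256:689d3599b38a325c6f4ae76c773b8c79a6f5f324305ca6a88ed1bf72cebc04f3

pip install -r requirements.txt

pip enables hash-checking automatically whenever any --hash option appears in the requirements file.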