
CLI tool for fetching paginated JSON from a URL

Project description



CLI tool for retrieving JSON from paginated APIs.

This tool works against APIs that use the HTTP Link header for pagination. The GitHub API is one example of this.
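The Link header carries the next page's URL as a rel="next" entry. As a rough illustration (with hypothetical example.com URLs), a minimal parser for the common case looks like this:

```python
import re

def parse_link_header(value):
    """Map each rel value in an HTTP Link header to its URL.

    Minimal sketch covering the common GitHub-style case; a full
    parser would handle additional parameters and quoting rules.
    """
    links = {}
    for part in value.split(","):
        match = re.match(r'\s*<([^>]+)>\s*;\s*rel="([^"]+)"', part)
        if match:
            url, rel = match.groups()
            links[rel] = url
    return links

header = (
    '<https://api.example.com/items?page=2>; rel="next", '
    '<https://api.example.com/items?page=5>; rel="last"'
)
print(parse_link_header(header)["next"])
```

When no rel="next" entry is present, the dict simply lacks that key, which is how the end of pagination is detected.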

Installation


pip install paginate-json

Or use pipx:

pipx install paginate-json


Usage

Run this tool against a URL that returns a JSON list of items and uses the Link: HTTP header to indicate the URL of the next page of results.

It will output a single JSON list containing all of the records, across multiple pages.
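Conceptually the tool runs a loop like the following sketch, with a fake in-memory `fetch` function standing in for real HTTP requests so the example runs offline:

```python
def paginate(url, fetch):
    """Yield every item across pages, following each page's next URL."""
    while url:
        items, url = fetch(url)  # url becomes None after the last page
        yield from items

# Fake two-page API standing in for a real Link-header endpoint:
pages = {
    "/items?page=1": ([{"id": 1}, {"id": 2}], "/items?page=2"),
    "/items?page=2": ([{"id": 3}], None),
}
combined = list(paginate("/items?page=1", pages.__getitem__))
print(combined)  # one list containing the records from every page
```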

paginate-json URL

You can use the --header option to send additional request headers. For example, if you have a GitHub OAuth token you can pass it like this:

paginate-json URL \
  --header Authorization "bearer e94d9e404d86..."

Some APIs may return a root-level object where the items you wish to gather are stored in a key, like this example from the Datasette JSON API:

{
  "ok": true,
  "rows": [
    {
      "id": 1,
      "name": "San Francisco"
    },
    {
      "id": 2,
      "name": "Los Angeles"
    },
    {
      "id": 3,
      "name": "Detroit"
    },
    {
      "id": 4,
      "name": "Memnonia"
    }
  ]
}
In this case, use --key rows to specify which key to extract the items from:

paginate-json URL \
  --key rows
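Per page, the key extraction amounts to the following (using a made-up page object of the shape shown above):

```python
# Hypothetical page object returned by the API:
page = {"ok": True, "rows": [{"id": 1, "name": "San Francisco"}]}

key = "rows"  # the value passed via --key
items = page[key] if key else page
print(items)  # just the list of records, ready to be combined across pages
```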

The output JSON will be streamed as a pretty-printed JSON array by default.

To switch to newline-delimited JSON, with a separate object on each line, add --nl:

paginate-json URL \
  --key rows \
  --nl
The output from that command looks like this:

{"id": 1, "name": "San Francisco"}
{"id": 2, "name": "Los Angeles"}
{"id": 3, "name": "Detroit"}
{"id": 4, "name": "Memnonia"}

Using this with sqlite-utils

This tool works well in conjunction with sqlite-utils. For example, here's how to load all of the GitHub issues for a project into a local SQLite database.

paginate-json \
  "" \
  --nl | \
  sqlite-utils upsert /tmp/issues.db issues - --nl --pk=id

You can then use other features of sqlite-utils to enhance the resulting database. For example, to enable full-text search on the issue title and body columns:

sqlite-utils enable-fts /tmp/issues.db issues title body
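Under the hood this builds a standard SQLite full-text index. A rough stand-in using only the standard library (an in-memory database, made-up rows, and assuming your SQLite build includes FTS5) looks like:

```python
import sqlite3

db = sqlite3.connect(":memory:")  # stand-in for /tmp/issues.db
db.execute("CREATE TABLE issues (id INTEGER PRIMARY KEY, title TEXT, body TEXT)")
db.executemany(
    "INSERT INTO issues VALUES (?, ?, ?)",
    [
        (1, "Fix pagination bug", "The Link header is not followed"),
        (2, "Improve docs", "Expand the README examples"),
    ],
)
# Roughly what enable-fts sets up: an FTS5 index over title and body
db.execute(
    "CREATE VIRTUAL TABLE issues_fts USING fts5"
    "(title, body, content=issues, content_rowid=id)"
)
db.execute(
    "INSERT INTO issues_fts(rowid, title, body) "
    "SELECT id, title, body FROM issues"
)
matches = db.execute(
    "SELECT rowid FROM issues_fts WHERE issues_fts MATCH 'pagination'"
).fetchall()
print(matches)  # rowids of issues matching the search term
```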

Using jq to transform each page

If you install the optional jq or pyjq dependency, you can also pass --jq PROGRAM to transform the results of each page using a jq program. The jq program you supply should transform each page of fetched results into an array of objects.

For example, to extract the id and title from each issue:

paginate-json \
  "" \
  --nl \
  --jq 'map({id, title})'
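The `map({id, title})` program keeps just those two fields from each object. In Python terms, with a made-up page of issue objects:

```python
# Hypothetical page of issue objects fetched from the API:
page = [
    {"id": 1, "title": "Fix bug", "state": "open"},
    {"id": 2, "title": "Add docs", "state": "closed"},
]

# Equivalent of the jq program map({id, title}):
slimmed = [{"id": item["id"], "title": item["title"]} for item in page]
print(slimmed)
```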

paginate-json --help

Usage: paginate-json [OPTIONS] URL

  Fetch paginated JSON from a URL

  Example usage:


Options:
  --version                Show the version and exit.
  --nl                     Output newline-delimited JSON
  --key TEXT               Top-level key to extract from each page
  --jq TEXT                jq transformation to run on each page
  --accept TEXT            Accept header to send
  --sleep INTEGER          Seconds to delay between requests
  --silent                 Don't show progress on stderr - default
  -v, --verbose            Show progress on stderr
  --show-headers           Dump response headers out to stderr
  --ignore-http-errors     Keep going on non-200 HTTP status codes
  --header <TEXT TEXT>...  Send custom request headers
  --help                   Show this message and exit.


Download files


Source Distribution

paginate-json-1.0.tar.gz (9.9 kB)

Built Distribution

paginate_json-1.0-py3-none-any.whl (9.8 kB)
