Quickly and painlessly dump all your Airtable schemas & data to JSON.


backup-airtable

Export your Airtable data to JSON files. It exports both the table's schema and the records.

Installation

The easiest way to run this is using pipx:

pipx install backup-airtable

You can also use brew:

brew install xavdid/projects/backup-airtable

Usage

Once authenticated, running backup-airtable will immediately start downloading data. There are a few available options (viewable via backup-airtable --help):

Usage: backup-airtable [OPTIONS] [BACKUP_DIRECTORY]

  Save data from Airtable to a series of local JSON files / folders

Options:
  --version              Show the version and exit.
  --ignore-table TEXT    Table id(s) to ignore when backing up.
  --airtable-token TEXT  Airtable Access Token  [required]
  --include-comments     Whether to include row comments in the backup. May
                         slow down the backup considerably if many rows have
                         comments.
  --help                 Show this message and exit.

You'll likely only need --ignore-table (which you can specify multiple times) to exclude specific tables from bases you otherwise want to back up.

Examples

  • backup-airtable
  • backup-airtable --include-comments
  • backup-airtable some_backup_folder
  • backup-airtable --ignore-table tbl123 --ignore-table tbl456

Authentication

You need to create a personal access token to use this tool. It has the format pat123.456 and can be created at https://airtable.com/create/tokens.

Ensure it has the following scopes:

  • data.records:read
  • schema.bases:read
  • if you're going to export comments (see Comments below):
    • data.recordComments:read

You can give it access to as many or as few bases as you'd like. Everything the token has access to will be backed up.

Supplying the Key

You can make the key available in the environment as AIRTABLE_TOKEN or via the --airtable-token flag:

  • AIRTABLE_TOKEN=pat123.456 backup-airtable
  • backup-airtable --airtable-token pat123.456

Exported Data Format

This tool creates a folder for each base, with a subfolder for each table containing that table's schema.json and records.json:

. (backup_directory)
├── videogames/
│   ├── games/
│   │   ├── schema.json
│   │   └── records.json
│   └── playthroughs/
│       ├── schema.json
│       └── records.json
└── tv/
    ├── shows/
    │   ├── schema.json
    │   └── records.json
    ├── seasons/
    │   ├── schema.json
    │   └── records.json
    └── watches/
        ├── schema.json
        └── records.json
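Because the layout is just nested folders of JSON, downstream scripts can discover every table without any Airtable-specific tooling. A minimal sketch using only the standard library (walk_backup and the backup path are illustrative, not part of the tool):

```python
import json
from pathlib import Path

def walk_backup(backup_dir):
    """Yield (base_name, table_name, records) for every table in a backup."""
    for base_dir in sorted(Path(backup_dir).iterdir()):
        if not base_dir.is_dir():
            continue
        for table_dir in sorted(base_dir.iterdir()):
            records_file = table_dir / "records.json"
            if records_file.exists():
                yield base_dir.name, table_dir.name, json.loads(records_file.read_text())

# e.g. print a row count per table:
# for base, table, records in walk_backup("airtable-backup-2025-01-01"):
#     print(f"{base}/{table}: {len(records)} records")
```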

schema.json holds the raw API response for the table's schema (which includes formula definitions):

{
  "fields": [
    {
      "id": "fldAReWzcSCy8lR6S",
      "name": "Name",
      "type": "singleLineText"
    },
    {
      "id": "fldapjPtWVGLeVEz6",
      "name": "Style",
      "options": {
        "choices": [
          {
            "color": "redLight2",
            "id": "selpGtES7bVHWFO68",
            "name": "Competitive"
          },
          {
            "color": "blueLight2",
            "id": "sel176WltZzGmNl3l",
            "name": "Cooperative"
          }
        ]
      },
      "type": "singleSelect"
    },
    {
      "id": "fldpMVjIrO1QjFeAy",
      "name": "Is Available?",
      "options": {
        "formula": "AND(IF({fldGC6t3qWTFCESvA}, {fldGC6t3qWTFCESvA}<={fld4hmOueoB5ah8Io}, 1), {fld4gls5vBed7NBOP} = 0)",
        "isValid": true,
        "referencedFieldIds": [
          "fldGC6t3qWTFCESvA",
          "fld4hmOueoB5ah8Io",
          "fld4gls5vBed7NBOP"
        ],
        "result": {
          "options": {
            "precision": 0
          },
          "type": "number"
        }
      },
      "type": "formula"
    }
  ],
  "id": "tblvcNVpUk07pRxUQ",
  "name": "Games",
  "primaryFieldId": "fldAReWzcSCy8lR6S",
  "views": [
    {
      "id": "viw2PrDfjQquMoTKb",
      "name": "Main View",
      "type": "grid"
    },
    {
      "id": "viweVcA0peE3M3zag",
      "name": "Add a New Game",
      "type": "form"
    }
  ]
}

and records.json holds the records themselves:

[
  {
    "commentCount": 0,
    "createdTime": "2017-09-19T06:21:48.000Z",
    "fields": {
      "Name": "Libertalia: Winds of Galecrest",
      "Style": "Competitive",
      "Is Available?": 1
    },
    "id": "rec0wIiSnMutUfoTY"
  },
  {
    "commentCount": 0,
    "createdTime": "2023-09-19T06:20:20.000Z",
    "fields": {
      "Name": "Hanabi",
      "Style": "Cooperative",
      "Is Available?": 0
    },
    "id": "rec48RFqGw8hAmZFY"
  }
]
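Since both files are plain JSON with stable top-level keys, post-processing needs nothing beyond the standard library. A sketch of two common transforms (field_types and by_id are illustrative helpers, not part of the tool; in practice you'd json.load these from the backup folder):

```python
def field_types(schema):
    """Map each field's name to its Airtable type, given a parsed schema.json."""
    return {field["name"]: field["type"] for field in schema["fields"]}

def by_id(records):
    """Index a parsed records.json list by record id."""
    return {record["id"]: record["fields"] for record in records}

# Trimmed versions of the payloads shown above:
schema = {
    "fields": [
        {"id": "fldAReWzcSCy8lR6S", "name": "Name", "type": "singleLineText"},
        {"id": "fldpMVjIrO1QjFeAy", "name": "Is Available?", "type": "formula"},
    ]
}
records = [
    {"id": "rec0wIiSnMutUfoTY", "fields": {"Name": "Libertalia: Winds of Galecrest"}},
    {"id": "rec48RFqGw8hAmZFY", "fields": {"Name": "Hanabi"}},
]

print(field_types(schema))  # {'Name': 'singleLineText', 'Is Available?': 'formula'}
print(by_id(records)["rec48RFqGw8hAmZFY"]["Name"])  # Hanabi
```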

Comments

Each row in Airtable can have comments, but downloading them takes an extra API call per row. For bases with lots of rows with comments, this can dramatically slow down the backup.

When --include-comments is passed, comments are included on each row, oldest to newest:

  {
    "commentCount": 1,
    "comments": [
      {
        "author": {
          "email": "email@website.com",
          "id": "usrOrn2etJhbw2dem",
          "name": "Bruce Wayne"
        },
        "createdTime": "2025-02-21T08:05:25.000Z",
        "id": "comx1KUhmPiHYX10w",
        "lastUpdatedTime": null,
        "text": "cool comment!"
      }
    ],
    "createdTime": "2021-05-24T04:19:13.000Z",
    "fields": {
      "Name": "Vantage",
      "Style": "Cooperative",
      "Is Available?": 0
    },
    "id": "recKPmZ4DkjYyFrV4"
  },

Differences from Upstream

This was originally forked from simonw/airtable-export and has since diverged. In the interest of simplicity & my own needs, I:

  • made backup_directory optional; it defaults to ./airtable-backup-<ISO_DATE>
  • removed ndjson, yaml, and sqlite options; it always outputs formatted JSON
  • removed base_id; it pulls every base the auth token has access to
  • removed the user-agent option for simplicity (though I'd be open to re-adding it later, if needed); calls are made with a default User-Agent of backup-airtable
  • removed schema option; it always dumps the schema
  • removed http-read-timeout; it defaults to a generous 60 seconds
  • it doesn't flatten records; the top-level keys are id, createdTime, and fields

Development

This project uses just for running tasks. First, create a virtualenv:

python -m venv .venv
source .venv/bin/activate

Then run just install to install the project and its development dependencies. At that point, the backup-airtable command will be available. Run just to see all the available commands.
