
amsterdam-schema-tools

Set of libraries and tools to work with Amsterdam schema.

Install the package with: pip install amsterdam-schema-tools. This installs the library and a command-line tool called schema, with various subcommands. A listing can be obtained from schema --help.

Subcommands that talk to a PostgreSQL database expect either a DATABASE_URL environment variable or a command line option --db-url with a DSN.
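As a sketch of what such a DSN looks like (the host, credentials and database name below are placeholders, not values used by the project), the common libpq-style URL can be inspected with Python's standard library:

```python
from urllib.parse import urlsplit

# Placeholder DSN in the common libpq URL form.
dsn = "postgresql://user:secret@localhost:5432/mydb"

parts = urlsplit(dsn)
scheme, host, port = parts.scheme, parts.hostname, parts.port
database = parts.path.lstrip("/")
```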

Many subcommands want to know where to find schema files. Most will look in a directory of schemas denoted by the SCHEMA_URL environment variable or the --schema-url command line option. E.g.,

schema create tables --schema-url=myschemas mydataset

will try to load the schema for mydataset from myschemas/mydataset/dataset.json.
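The lookup convention can be sketched as follows; `dataset_schema_path` is a hypothetical helper for illustration, not part of the library's API:

```python
from pathlib import Path

def dataset_schema_path(schema_url: str, dataset_id: str) -> Path:
    # Convention described above: <schema-url>/<dataset id>/dataset.json
    return Path(schema_url) / dataset_id / "dataset.json"
```

With the example command above, `dataset_schema_path("myschemas", "mydataset")` yields `myschemas/mydataset/dataset.json`.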

Generate amsterdam schema from existing database tables

The --prefix argument controls whether a table prefix is stripped from the table names in the generated schema; this is required for Django models.
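The effect of --prefix amounts to plain string handling; `strip_prefix` below is an illustrative sketch, not the library's implementation:

```python
def strip_prefix(table_name: str, prefix: str) -> str:
    # e.g. "bag_panden" with prefix "bag_" becomes "panden"
    if table_name.startswith(prefix):
        return table_name[len(prefix):]
    return table_name
```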

As an example, we can generate a BAG schema. Point DATABASE_URL to the bag_v11 database and run:

schema show tablenames | sort | awk '/^bag_/{print}' | xargs schema introspect db bag --prefix bag_ | jq

The jq invocation pretty-prints the output, which can then be redirected directly into the correct directory of the schemas repository.

Express amsterdam schema information in relational tables

Amsterdam schema is expressed as jsonschema. However, to make it easier for people with a more relational mind- or toolset it is possible to express amsterdam schema as a set of relational tables. These tables are meta_dataset, meta_table and meta_field.

It is possible to convert a jsonschema into the relational table structure and vice-versa.
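As a rough sketch of the flattening, a dataset can be broken into three row lists that mirror these tables. The column names used here are assumptions for illustration, not the actual meta_dataset/meta_table/meta_field layouts:

```python
# A minimal jsonschema-style dataset, flattened into three row lists.
dataset = {
    "id": "mydataset",
    "tables": [
        {"id": "mytable", "fields": [{"name": "geometry", "type": "string"}]},
    ],
}

meta_dataset = [{"id": dataset["id"]}]
meta_table = [
    {"dataset_id": dataset["id"], "id": t["id"]} for t in dataset["tables"]
]
meta_field = [
    {"table_id": t["id"], "name": f["name"], "type": f["type"]}
    for t in dataset["tables"]
    for f in t["fields"]
]
```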

To convert a dataset in jsonschema format into the relational table structure:

schema import schema <id of dataset>

This imports a dataset that already exists in jsonschema format into the relational tables.

To convert from relational tables back to jsonschema:

schema show schema <id of dataset>

Generating amsterdam schema from existing GeoJSON files

The following command can be used to inspect and import the GeoJSON files:

schema introspect geojson <dataset-id> *.geojson > schema.json
edit schema.json  # fine-tune the table names
schema import geojson schema.json <table1> file1.geojson
schema import geojson schema.json <table2> file2.geojson
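For reference, a minimal GeoJSON input file of the kind these commands read can be produced as follows; the coordinates and property names are placeholders:

```python
import json

# A FeatureCollection with a single point feature.
feature_collection = {
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [4.9041, 52.3676]},
            "properties": {"name": "example"},
        }
    ],
}

with open("file1.geojson", "w") as f:
    json.dump(feature_collection, f)
```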

Importing GOB events

The schematools library has a module that reads GOB events into database tables that are defined by an Amsterdam schema. This module can be used to read GOB events from a Kafka stream. It is also possible to read GOB events from a batch file with line-separated events using:

schema import events <path-to-dataset> <path-to-file-with-events>
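Reading such a line-separated batch file can be sketched as below; the structure of each event payload is defined by GOB and not shown here, so the demonstration events are placeholders:

```python
import json

def iter_events(path):
    # One JSON-encoded event per line; blank lines are skipped.
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

# Tiny demonstration file; real payloads come from GOB.
with open("events.ndjson", "w") as f:
    f.write('{"event": "ADD"}\n{"event": "MODIFY"}\n')

events = list(iter_events("events.ndjson"))
```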

Export datasets

Datasets can be exported to different file formats. Currently supported are geopackage, csv and jsonlines. The command for exporting the dataset tables is:

schema export [geopackage|csv|jsonlines] <id of dataset>
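The jsonlines format in particular is simple to sketch: one JSON object per line. The file name and columns below are placeholders:

```python
import json

rows = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]

# jsonlines: each record serialized as JSON on its own line.
with open("mydataset.jsonl", "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")

exported = open("mydataset.jsonl").read().splitlines()
```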

The command has several command-line options. Documentation about these flags can be shown using the --help option.

Schema Tools as a pre-commit hook

Included in the project is a pre-commit hook that can validate schema files in a project such as amsterdam-schema.

To configure it, extend the .pre-commit-config.yaml in the project with the schema file definitions as follows:

  - repo: https://github.com/Amsterdam/schema-tools
    rev: v3.5.0
    hooks:
      - id: validate-schema
        args: ['https://schemas.data.amsterdam.nl/schema@v1.2.0#']
        exclude: |
            (?x)^(
                schema.+|             # exclude meta schemas
                datasets/index.json
            )$

args is a one-element list containing the URL to the Amsterdam Meta Schema.

validate-schema will only process json files. However, not all json files are Amsterdam schema files. To exclude files or directories, use exclude with a pattern.
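The exclude value is a regular expression in verbose ((?x)) mode, as commonly used by pre-commit. The pattern from the snippet above behaves like this:

```python
import re

# Same pattern as in the .pre-commit-config.yaml example.
exclude = re.compile(r"""(?x)^(
    schema.+|             # exclude meta schemas
    datasets/index.json
)$""")

is_meta_schema = bool(exclude.match("schema@v1.2.0.json"))
is_dataset_file = bool(exclude.match("datasets/bag/dataset.json"))
```

Files matching the pattern (meta schemas and the dataset index) are skipped; ordinary dataset schemas are still validated.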

pre-commit depends on properly tagged revisions of its hooks. Hence, we should not only bump version numbers on updates to this package, but also commit a tag with the version number; see below.

Doing a release

(This is for schema-tools developers.)

We use GitHub pull requests. If your PR should produce a new release of schema-tools, make sure one of the commits increments the version number in setup.cfg appropriately. Then,

  • merge the commit in GitHub, after review;
  • pull the code from GitHub and merge it into the master branch, git checkout master && git fetch origin && git merge --ff-only origin/master;
  • tag the release X.Y.Z with git tag -a vX.Y.Z -m "Bump to vX.Y.Z";
  • push the tag to GitHub with git push origin --tags;
  • release to PyPI: make upload (requires the PyPI secret).

Mocking data

The schematools library contains two Django management commands to generate mock data. The first one is create_mock_data, which generates mock data for all the datasets found at the configured schema location SCHEMA_URL (where SCHEMA_URL can be configured to point to a path on the local filesystem).

The create_mock_data command processes all datasets. However, it is possible to limit this by adding positional arguments, which can be dataset ids or paths to dataset.json files on the local filesystem.

Furthermore, the command has some options, e.g. to change the default number of generated records (--size) or to reverse the meaning of the positional arguments using --exclude.

To avoid duplicate primary keys on subsequent runs, the --start-at option can be used to start the autonumbering of primary keys at an offset.
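The idea behind --start-at can be sketched with a simple counter; `primary_keys` is illustrative, not the command's implementation:

```python
import itertools

# Hand out primary keys from an offset so a second run does not
# collide with keys generated by a previous run.
def primary_keys(start_at=1):
    return itertools.count(start_at)

keys = primary_keys(start_at=50)
first_five = [next(keys) for _ in range(5)]
```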

E.g. to generate 5 records for the bag and gebieden datasets, starting the autonumbering of primary keys at 50:

    django create_mock_data bag gebieden --size 5 --start-at 50

To generate records for all datasets, except for the fietspaaltjes dataset:

    django create_mock_data fietspaaltjes --exclude  # or -x

To generate records for the bbga dataset, by loading the schema from the local filesystem:

    django create_mock_data <path-to-bbga-schema>/dataset.json

During record generation in create_mock_data, relations are not added, so foreign key fields are filled with NULL values.

There is a second management command, relate_mock_data, that can be used to add the relations. This command supports positional arguments for datasets in the same way as create_mock_data.
Furthermore, it also has the --exclude option to reverse the meaning of the positional dataset arguments.
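The two-phase approach can be sketched as follows: mock records start with NULL (None) relation columns, and the relate step then fills them from existing primary keys. The table and column names here are invented for illustration:

```python
import random

# Phase 1 (create_mock_data): records with empty relation columns.
panden = [{"id": i, "buurt_id": None} for i in range(50, 55)]
buurten = [{"id": i} for i in range(50, 53)]

# Phase 2 (relate_mock_data): fill each foreign key from existing ids.
random.seed(0)
for row in panden:
    row["buurt_id"] = random.choice(buurten)["id"]
```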

E.g. to add relations to all datasets:

    django relate_mock_data

To add relations for bag and gebieden only:

    django relate_mock_data bag gebieden

To add relations for all datasets except meetbouten:

    django relate_mock_data meetbouten --exclude  # or -x

NB. When only a subset of the datasets is being mocked, the command can fail when datasets that are involved in a relation are missing, so make sure to include all relevant datasets.

For convenience an additional management command truncate_tables has been added, to truncate all tables.
