Karp backend

This is version 6 of the Karp backend (the legacy version is v5).

Setup

This project uses Poetry and MariaDB.

  1. Run make install, or make install-dev for a development install

  2. Install MariaDB and create a database

  3. Set up environment variables (these can be placed in a .env file in the project root; poetry run will then set them):

    export DB_DATABASE=<name of database>
    export DB_USER=<database user>
    export DB_PASSWORD=<user's password>
    export DB_HOST=localhost
    export AUTH_JWT_PUBKEY_PATH=/path/to/pubkey
    
  4. Activate the virtual environment by running: poetry shell

  5. Run make init-db (or poetry run alembic upgrade head) to initialize the database

  6. Run make serve or make serve-w-reload to start the development server

    or poetry shell and then uvicorn asgi:app

  7. To set up Elasticsearch, download Elasticsearch 6.x or 7.x and start it

  8. Install the Elasticsearch Python libraries for the matching version:

    1. If you use Elasticsearch 6.x, run source <VENV_NAME>/bin/activate and pip install -e .[elasticsearch6]
    2. If you use Elasticsearch 7.x, run source <VENV_NAME>/bin/activate and pip install -e .[elasticsearch7]
  9. Add environment variables:

    export ES_ENABLED=true
    export ELASTICSEARCH_HOST=localhost:9200
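
For reference, here is a minimal sketch of all the environment variables from steps 3 and 9 collected in one place; the values are placeholders, and the Elasticsearch variables are only needed if you completed steps 7-9:

    export DB_DATABASE=karp
    export DB_USER=karp
    export DB_PASSWORD=secret
    export DB_HOST=localhost
    export AUTH_JWT_PUBKEY_PATH=/path/to/pubkey
    export ES_ENABLED=true
    export ELASTICSEARCH_HOST=localhost:9200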

Create test resources

  1. Run poetry shell and then:
  2. karp-cli entry-repo create karp/tests/data/config/places.json
  3. karp-cli resource create karp/tests/data/config/places.json
  4. karp-cli entries import places tests/data/places.jsonl
  5. Do the same for municipalities (see the sketch after this list)
  6. karp-cli resource publish places
  7. karp-cli resource publish municipalities
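
For step 5, here is a sketch of the corresponding commands for municipalities, assuming its config and data files follow the same naming pattern as the places files (adjust the paths if they differ):

    karp-cli entry-repo create karp/tests/data/config/municipalities.json
    karp-cli resource create karp/tests/data/config/municipalities.json
    karp-cli entries import municipalities tests/data/municipalities.jsonl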

Pre-processing data before publishing

**TODO: review this.** Pre-processing can be used to reduce downtime, because preprocessing is sometimes faster on a machine other than the one that talks to Elasticsearch. Create the resource and import the same data on both machines, then preprocess on machine 1 and use the result on machine 2.

  1. Create resource and import data as usual.

  2. Run karp-cli preprocess --resource_id places --version 2 --filename places_preprocessed

    places_preprocessed will contain a pickled dataset with everything that is needed

  3. Run karp-cli publish_preprocessed --resource_id places --version 2 --data places_preprocessed

  4. Alternatively, if the resource was already published, run karp-cli reindex_preprocessed --resource_id places --data places_preprocessed
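
A sketch of the two-machine workflow described above, assuming the preprocessed file is copied over with scp (the target path is a placeholder) and that both machines have already created the resource and imported the same data:

    # On machine 1 (preprocessing)
    karp-cli preprocess --resource_id places --version 2 --filename places_preprocessed
    scp places_preprocessed machine2:/path/to/karp/

    # On machine 2 (the one talking to Elasticsearch)
    karp-cli publish_preprocessed --resource_id places --version 2 --data places_preprocessed
    # or, if the resource was already published:
    karp-cli reindex_preprocessed --resource_id places --data places_preprocessed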

Technologies

Python

  • Poetry
  • FastAPI
  • SQLAlchemy
  • Typer
  • Elasticsearch
  • Elasticsearch DSL

Databases

  • MariaDB
  • Elasticsearch

Development

Version handling

The version can be bumped with bumpversion.

Usage:

  • Increase patch number a.b.X => a.b.(X+1): bumpversion patch
  • Increase minor number a.X.c => a.(X+1).c: bumpversion minor
  • Increase major number X.b.c => (X+1).b.c: bumpversion major
  • Set a custom version a.b.c => X.Y.Z: bumpversion --new-version X.Y.Z

bumpversion is configured in .bumpversion.cfg.
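
For example, a patch release could be cut like this (a sketch, assuming bumpversion is installed in the project's virtual environment; the --dry-run pass only previews the changes):

    poetry run bumpversion --dry-run --verbose patch   # preview which files would change
    poetry run bumpversion patch                       # bump a.b.X => a.b.(X+1)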

The version is changed in the files listed in .bumpversion.cfg.
