Backend for Karp
This package - code and documentation - is still under construction.
Karp is the lexical platform of Språkbanken, now migrated to Python 3.6+.
Karp in Docker
For easy testing, use Docker to run karp-backend:

- Follow the steps given here
- Run `docker-compose up -d`
- Test it by running `curl localhost:8081/app/test`
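The test call above can be wrapped in a small shell function that reports whether Karp answers. This is a sketch: the function name is my own, and it assumes the Docker setup exposes Karp on port 8081 as in the steps above.

```shell
# Smoke test for a running Karp instance (hypothetical helper, not part of
# the repo). Assumes the Docker setup above, exposing Karp on port 8081.
karp_smoke_test() {
    url="${1:-localhost:8081}/app/test"
    # -s silences the progress bar, -f makes curl fail on HTTP errors
    if curl -sf "$url" > /dev/null 2>&1; then
        echo "Karp answered at $url"
    else
        echo "Karp did not answer at $url"
        return 1
    fi
}
```

Call it as `karp_smoke_test` for the default host, or pass another `host:port` as the first argument.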
If you want to use Karp without Docker, keep on reading.
Prerequisites
- Elasticsearch 6
- SQL, preferably MariaDB
- a WSGI server, for example mod_wsgi with Apache, Waitress, Gunicorn, uWSGI...
- an authentication server. Read more about this here
- Python >= 3.6 with pip
Installation
Karp uses virtual envs for Python. To get running:

- run `make install`
- or:
  - create the virtual environment using `python3 -m venv venv`
  - activate the virtual environment with `source venv/bin/activate`
  - install the dependencies with `pip install -r requirements.txt`
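The manual steps can also be collected into one function. This is a sketch (the helper name is my own, not part of the repo); it assumes you run it from the root of a cloned karp-backend checkout, where `requirements.txt` lives.

```shell
# Manual installation as one step (hypothetical helper). Run from the root
# of a cloned karp-backend checkout.
install_karp() {
    python3 -m venv venv || return 1     # create the virtual environment
    . venv/bin/activate                  # activate it in this shell
    pip install -r requirements.txt      # install Karp's dependencies
}
```

Because activation only affects the current shell, source the file defining this function rather than running it as a subprocess.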
Configuration
Set the environment variables `KARP5_INSTANCE_PATH` and `KARP5_ELASTICSEARCH_URL`:

- using `export VAR=value`
- or creating a file `.env` in the root of your cloned path with `VAR=value`

`KARP5_INSTANCE_PATH` - the path where your configs are. If you have cloned this repo you can use `/path/to/karp-backend/`.
`KARP5_ELASTICSEARCH_URL` - the URL to Elasticsearch. Typically `localhost:9200`.
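Put together, a minimal `.env` in the repo root might look like this. The values are the examples from above; adjust them to your setup.

```shell
# Example .env for karp-backend (values are placeholders from the text above)
KARP5_INSTANCE_PATH=/path/to/karp-backend/
KARP5_ELASTICSEARCH_URL=localhost:9200
```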
Copy `config.json.example` to `config.json` and make your changes.
You will also need to make configurations for your lexicons.
Read more here.
Tests
TODO: DO MORE TESTS!
Run the tests by typing `make test`.

Test that karp-backend is working by starting it with `make run` or `python run.py`.
Known bugs
Counts from the statistics call may not be accurate when performing subaggregations (multiple buckets) on big indices unless the query restricts the search space. Using `breadth_first` mode does not (always) help.
Possible workarounds:
- use composite aggregation instead, but this does not work with filtering.
- set a bigger `shard_size` (27 000 works for saldo), but this might break your ES cluster.
- have smaller indices (one lexicon per index), but this does not help for big lexicons or statistics over many lexicons.
- don't allow deeper subaggregations than 2. Changing the `size` won't help.
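As an illustration of the `shard_size` workaround, a terms aggregation request could be built like this. This is a sketch only: the field name `lexiconName` and the helper are my own inventions, not Karp's API, and `shard_size` is a standard Elasticsearch terms-aggregation parameter.

```shell
# Sketch of the shard_size workaround. The field name "lexiconName" is a
# placeholder. With DRY_RUN=1 the function only prints the aggregation body.
stats_with_shard_size() {
    host="${1:-localhost:9200}"
    body='{"size": 0, "aggs": {"per_lexicon": {"terms": {"field": "lexiconName", "shard_size": 27000}}}}'
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "$body"
    else
        curl -s "$host/_search" -H 'Content-Type: application/json' -d "$body"
    fi
}
```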
Elasticsearch
If saving stops working because of `Database Exception: Error during update. Message: TransportError(403, u'cluster_block_exception', u'blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];')`, you need to unlock the relevant ES index.

This is how you do it. Repeat for every combination of `host` and `port` that is relevant for you, but you only need to do it once per cluster.

- Check if any index is locked:
  `curl <host>:<port>/_all/_settings/index.blocks*`
  - If all is open, Elasticsearch answers with `{}`
  - else it answers with `{<index>: { "settings": { "index": { "blocks": {"read_only_allow_delete": "true"} } } }, ... }`
- To unlock all locked indices on a `host` and `port`:
  `curl -X PUT <host>:<port>/_all/_settings -H 'Content-Type: application/json' -d '{"index.blocks.read_only_allow_delete": null}'`
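The check and the unlock call can be combined into one helper per `host:port`. This is a sketch (not part of the repo); with `DRY_RUN=1` it only prints the curl commands instead of executing them.

```shell
# Check and unlock read-only indices on one host:port (hypothetical helper).
# With DRY_RUN=1 the curl commands are printed instead of executed.
unlock_es_indices() {
    hostport="${1:-localhost:9200}"
    check="curl -s $hostport/_all/_settings/index.blocks*"
    unlock="curl -s -X PUT $hostport/_all/_settings -H 'Content-Type: application/json' -d '{\"index.blocks.read_only_allow_delete\": null}'"
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "$check"
        echo "$unlock"
    else
        eval "$check"
        eval "$unlock"
    fi
}
```

Run it once per cluster, e.g. `unlock_es_indices localhost:9200`.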