
Elastic Wikidata

Simple CLI tools to load a subset of Wikidata into Elasticsearch. Part of the Heritage Connector project.



Why?

Running text search programmatically on Wikidata means using the MediaWiki query API, either directly or through the Wikidata query service/SPARQL.

There are a couple of reasons you may not want to do this:

  • time constraints/large volumes: APIs are rate-limited, and you can only do one text search per SPARQL query
  • better search: using Elasticsearch allows for more flexible and powerful text search capabilities.* We're using our own Elasticsearch instance to do nearest neighbour search on embeddings, too.

* CirrusSearch is a MediaWiki extension that enables direct search on Wikidata using Elasticsearch, if you require powerful search and are happy with the rate limit.

Installation

from PyPI: pip install elastic_wikidata

from the repo (commands sketched below):

  1. Download
  2. cd into root
  3. pip install -e .
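
For example, the repo install might look like this (a sketch, assuming the repository lives at github.com/TheScienceMuseum/elastic-wikidata, its home within the Heritage Connector project):

# clone the repository, move into its root, and install in editable mode
git clone https://github.com/TheScienceMuseum/elastic-wikidata.git
cd elastic-wikidata
pip install -e .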

Setup

elastic-wikidata needs the Elasticsearch credentials ELASTICSEARCH_CLUSTER, ELASTICSEARCH_USER and ELASTICSEARCH_PASSWORD to connect to your ES instance. You can set these in one of three ways:

  1. Using environment variables: export ELASTICSEARCH_CLUSTER=https://... etc
  2. Using config.ini: pass the -c parameter followed by a path to an ini file containing your Elasticsearch credentials (a sketch is shown below).
  3. Pass each variable in at runtime using options --cluster/-c, --user/-u, --password/-p.
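
For option 2, the ini file only needs to hold the three credentials. A minimal sketch, assuming the keys match the environment variable names (the [ELASTIC] section name is an assumption; match it to the example in the repo):

[ELASTIC]
ELASTICSEARCH_CLUSTER=https://my-cluster.example.com:9243
ELASTICSEARCH_USER=elastic
ELASTICSEARCH_PASSWORD=my-password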

Usage

Once installed, the package is accessible through the command ew. A call is structured as follows:

ew <task> <options>

Task is either:

  • dump: load entities from a local Wikidata dump file (.ndjson)
  • query: load entities returned by a SPARQL query

Both are described in more detail below.

A full list of options can be found with ew --help, but the following are likely to be useful (a combined example follows the list):

  • --index/-i: the index name to push to. If not specified at runtime, elastic-wikidata will prompt for it
  • --limit/-l: limit the number of records pushed into ES. You might want to use this for a small trial run before importing the whole thing.
  • --properties/-prop: pass a comma-separated list of properties to include in the ES index. E.g. p31,p21.
  • --language/-lang: Wikimedia language code. Only one language is supported at this time.
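
Putting these together, a trial-run load might look like this (the file path and index name are placeholders):

ew dump -p humans.ndjson -i wikidata-humans -l 1000 -prop p31,p21 -lang en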

Loading from Wikidata dump (.ndjson)

ew dump -p <path_to_json> <other_options>

This is useful if you want to create one or more large subsets of Wikidata in different Elasticsearch indexes (millions of entities).

Time estimate: Loading all ~8 million humans into an AWS Elasticsearch index took me about 20 minutes. Creating the humans subset using wikibase-dump-filter took about 3 hours, using its instructions for parallelising.

  1. Download the complete Wikidata dump (latest-all.json.gz from https://dumps.wikimedia.org/wikidatawiki/entities/). This is a large file: 87GB as of July 2020.
  2. Use maxlath's wikibase-dump-filter to create a subset of the Wikidata dump.
  3. Run ew dump with flag -p pointing to the JSON subset. You might want to test it with a limit (using the -l flag) first. A sketch of the full workflow is below.
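
A sketch of the whole workflow, using wikibase-dump-filter's stdin/stdout interface to keep all humans (entities with instance of (P31) = human (Q5)); file and index names are placeholders:

# 1. download the full dump (tens of GB)
wget https://dumps.wikimedia.org/wikidatawiki/entities/latest-all.json.gz

# 2. filter to a subset: entities with P31 = Q5 (humans)
zcat latest-all.json.gz | wikibase-dump-filter --claim P31:Q5 > humans.ndjson

# 3. trial run with a limit, then the full load
ew dump -p humans.ndjson -i wikidata-humans -l 1000
ew dump -p humans.ndjson -i wikidata-humans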

Loading from SPARQL query

ew query -p <path_to_sparql_query> <other_options>

For smaller collections of Wikidata entities it might be easier to populate an Elasticsearch index directly from a SPARQL query rather than downloading the whole Wikidata dump to take a subset. ew query automatically paginates SPARQL queries so that a heavy query like 'return all the humans' doesn't result in a timeout error.

Time estimate: Loading 10,000 entities from Wikidata into an AWS-hosted Elasticsearch index took me about 6 minutes.

  1. Write a SPARQL query and save it to a text/.rq file (a sketch is shown below).
  2. Run ew query with the -p option pointing to the file containing the SPARQL query. Optionally add a --page_size for the SPARQL query.
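
For step 1, a query returning all humans might look like this (a sketch; elastic-wikidata is assumed to expect the entity variable to be named ?item, as in the repo's example):

SELECT ?item WHERE {
  ?item wdt:P31 wd:Q5 .
}

Saved as humans.rq (a hypothetical name), it can then be loaded with:

ew query -p humans.rq -i wikidata-humans --page_size 100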

Temporary side effects

As of version 0.3.1, refreshing the search index is disabled for the duration of the load by default, as recommended by Elasticsearch. Refresh is re-enabled at the default interval of 1s after the load is complete. To disable this behaviour, use the flag --no_disable_refresh/-ndr.
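
For example, to keep the index refreshing during the load (file and index names are placeholders):

ew dump -p humans.ndjson -i wikidata-humans -ndr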
