
A Python async client for the MeiliSearch API

Project description

MeiliSearch Python Async


Meilisearch Python Async is a Python async client for the MeiliSearch API. MeiliSearch also has an official Python client.

Which of the two clients to use comes down to your particular use case. The purpose of this async client is to allow for non-blocking calls when working in async frameworks such as FastAPI, or when the code base you are working in is already async. If this does not match your use case, the official client will be a better choice.
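
For example, a minimal sketch of how the client could be used from a FastAPI endpoint. The app, route, and query parameter names here are illustrative and not part of this package:

from fastapi import FastAPI

from meilisearch_python_async import Client

app = FastAPI()

@app.get("/search")
async def search_books(q: str):
    # Opening the client per request keeps the example self-contained;
    # the search call is awaited so it does not block the event loop.
    async with Client('http://127.0.0.1:7700', 'masterKey') as client:
        result = await client.index("books").search(q)
        return result.hits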

Installation

Using a virtual environment is recommended for installing this package. Once the virtual environment is created and activated, install the package with:

pip install meilisearch-python-async
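
If you do not already have a virtual environment, one way to create and activate it is:

python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate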

Run MeiliSearch

There are several ways to run MeiliSearch. Pick the one that works best for your use case and then start the server.

As an example, to use Docker:

docker pull getmeili/meilisearch:latest
docker run -it --rm -p 7700:7700 getmeili/meilisearch:latest ./meilisearch --master-key=masterKey

Usage

Add Documents

  • Note: `client.index("books")` creates an instance of an Index object but does not make a network call to send the data yet, so it does not need to be awaited.
from meilisearch_python_async import Client

async with Client('http://127.0.0.1:7700', 'masterKey') as client:
    index = client.index("books")

    documents = [
        {"id": 1, "title": "Ready Player One"},
        {"id": 42, "title": "The Hitchhiker's Guide to the Galaxy"},
    ]

    await index.add_documents(documents)

The server will return an update id that can be used to get the status of the updates. To do this, save the response from adding the documents to a variable (it will be an UpdateId object) and use it to check the status of the update.

update = await index.add_documents(documents)
status = await client.index('books').get_update_status(update.update_id)
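
Updates are processed asynchronously by the server, so the status may still be enqueued immediately after adding documents. A minimal sketch of polling until the update finishes, assuming the returned status object exposes the API's status field (e.g. "processed" or "failed"):

import asyncio

update = await index.add_documents(documents)

while True:
    status = await index.get_update_status(update.update_id)
    # Stop polling once MeiliSearch reports a terminal state.
    if status.status in ("processed", "failed"):
        break
    await asyncio.sleep(0.5)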

Add Documents In Batches

Splitting documents into batches can be useful with large datasets because it reduces the RAM usage during indexing.

from meilisearch_python_async import Client

async with Client('http://127.0.0.1:7700', 'masterKey') as client:
    index = client.index("books")

    documents = [
        {"id": 1, "title": "Ready Player One"},
        {"id": 42, "title": "The Hitchhiker's Guide to the Galaxy"},
        ...
    ]

    await index.add_documents_in_batches(documents, batch_size=100)

The server will return a list of update ids that can be used to get the status of the updates. To do this, save the response from adding the documents to a variable (it will be a list of UpdateId objects) and use it to check the status of each update.

updates = await index.add_documents_in_batches(documents, batch_size=20)
for update in updates:
    status = await client.index('books').get_update_status(update.update_id)

Basic Searching

search_result = await index.search("ready player")

Base Search Results: SearchResults object with values

SearchResults(
    hits = [
        {
            "id": 1,
            "title": "Ready Player One",
        },
    ],
    offset = 0,
    limit = 20,
    nb_hits = 1,
    exhaustive_nb_hits = bool,
    facets_distribution = None,
    processing_time_ms = 1,
    query = "ready player",
)
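
The hits are plain dictionaries, so the matched documents can be read directly from the result, for example:

for hit in search_result.hits:
    print(hit["title"])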

Custom Search

Information about the parameters can be found in the search parameters section of the documentation.

search_result = await index.search(
    "guide",
    attributes_to_highlight=["title"],
    filters="book_id > 10"
)

Custom Search Results: SearchResults object with values

SearchResults(
    hits = [
        {
            "id": 42,
            "title": "The Hitchhiker's Guide to the Galaxy",
            "_formatted": {
                "id": 42,
                "title": "The Hitchhiker's Guide to the <em>Galaxy</em>"
            }
        },
    ],
    offset = 0,
    limit = 20,
    nb_hits = 1,
    exhaustive_nb_hits = bool,
    facets_distribution = None,
    processing_time_ms = 5,
    query = "galaxy",
)

Documentation

See our docs for the full documentation.

Compatibility with MeiliSearch

This package only guarantees compatibility with version v0.24 of MeiliSearch.

Contributing

Contributions to this project are welcome. If you are interested in contributing, please see our contributing guide.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

meilisearch-python-async-0.20.1.tar.gz (19.0 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

meilisearch_python_async-0.20.1-py3-none-any.whl (21.6 kB)

Uploaded Python 3

File details

Details for the file meilisearch-python-async-0.20.1.tar.gz.

File metadata

  • Download URL: meilisearch-python-async-0.20.1.tar.gz
  • Upload date:
  • Size: 19.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.1.11 CPython/3.9.8 Linux/5.11.0-1021-azure

File hashes

Hashes for meilisearch-python-async-0.20.1.tar.gz

  • SHA256: cb5f86a74a2fb8dc531d9facec56e5681d38ca5344d8ce26623e2fab1c744e89
  • MD5: 7cfd8ca3a7322c0bdff52a80a7db4afd
  • BLAKE2b-256: 303a13bcce03e5eb19c363b783d9f1d40b6a9e893f9bbf4a964acf0b681f3ee6

See more details on using hashes here.
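
To check a downloaded file against these hashes yourself, a small sketch using only the standard library; the local file path assumes the sdist listed above has been downloaded to the current directory:

import hashlib

expected = "cb5f86a74a2fb8dc531d9facec56e5681d38ca5344d8ce26623e2fab1c744e89"

with open("meilisearch-python-async-0.20.1.tar.gz", "rb") as f:
    actual = hashlib.sha256(f.read()).hexdigest()

# The digests match only if the download was not corrupted or altered.
print("OK" if actual == expected else "Hash mismatch!")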

File details

Details for the file meilisearch_python_async-0.20.1-py3-none-any.whl.

File metadata

File hashes

Hashes for meilisearch_python_async-0.20.1-py3-none-any.whl

  • SHA256: f829f99ed7319c67ce6081a25814365ddf14b133fefa254386c592f2b6e9ad16
  • MD5: 2a1b9446fd06a1fa7e721102b0e7c780
  • BLAKE2b-256: 83c60887221864db9172812df9c7037233534d256c4bd7bcee47d844d30a22a3

See more details on using hashes here.
