
🕷️ Scrapy → Meilisearch Pipeline


A Scrapy pipeline that batches items and indexes them into Meilisearch, with optional index creation and settings updates, built on the modern Meilisearch Python client.


✨ Features

  • ✅ Uses the official modern Meilisearch client (TaskInfo / Pydantic models)
  • 🧰 Optional index creation (with primaryKey) and index settings update
  • 📦 Batching of items before insertion
  • 🔎 Task tracking with status check (failed tasks are logged and stored)
  • 🧪 Example Scrapy project + Docker Compose for Meilisearch
  • 🧹 Tooling: uv, pytest, ruff, black, mypy, just tasks

🧠 How batching works (pipeline logic)

The pipeline keeps two internal buffers:

  1. _buffer → a list of items waiting to be sent to Meilisearch
  2. _tasks → a list of Meilisearch TaskInfo objects created by add_documents() and update_settings()

Flow (a code sketch follows this list):

  1. process_item converts an item to dict and pushes it into _buffer.
  2. When _buffer length reaches MEILI_BATCH_SIZE, the pipeline performs a flush:
    • Sends the whole _buffer with index.add_documents(batch)
    • Appends the returned TaskInfo to _tasks
    • Calls _check_all_tasks(): waits on all tasks in _tasks via wait_for_task() and
      • if any task ends with status="failed", it is moved to _failed_tasks
      • otherwise it is discarded (success) and _tasks is cleared
  3. close_spider:
    • If _buffer still has items, a final flush is executed (and tasks checked)
    • If _tasks still contains tasks (e.g., settings only), they are checked
    • If any failed tasks were detected, they are logged (no exception is raised by design)
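
A minimal sketch of this flow (simplified: the real pipeline in src/scrapy_meili_pipeline/meili_pipeline.py also reads its configuration from the Scrapy settings, optionally creates the index, and applies MEILI_INDEX_SETTINGS):

# Simplified sketch of the batching flow described above, not the actual source.
class MeiliSearchPipeline:
    def __init__(self, index, batch_size):
        self.index = index            # a meilisearch Index object
        self.batch_size = batch_size  # MEILI_BATCH_SIZE
        self._buffer = []             # items waiting to be sent
        self._tasks = []              # TaskInfo objects still to check
        self._failed_tasks = []       # tasks that ended as "failed"

    def process_item(self, item, spider):
        self._buffer.append(dict(item))
        if len(self._buffer) >= self.batch_size:
            self._flush()
        return item

    def _flush(self):
        # Send the whole buffer as one batch and remember the TaskInfo.
        self._tasks.append(self.index.add_documents(self._buffer))
        self._buffer = []
        self._check_all_tasks()

    def _check_all_tasks(self):
        # Wait for every pending task; keep only the failed ones.
        for task_info in self._tasks:
            task = self.index.wait_for_task(task_info.task_uid)
            if task.status == "failed":
                self._failed_tasks.append(task)
        self._tasks = []

    def close_spider(self, spider):
        if self._buffer:
            self._flush()              # final partial batch
        if self._tasks:
            self._check_all_tasks()    # e.g. settings-only tasks
        for task in self._failed_tasks:
            spider.logger.error("Meilisearch task failed: %s", task)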

Benefits of this approach:

  • Minimal memory use (bounded by MEILI_BATCH_SIZE)
  • Early surfacing of Meilisearch task failures during the crawl
  • Predictable and simple control flow

📦 Installation

From PyPI:

pip install scrapy-meili-pipeline

Using uv:

uv add scrapy-meili-pipeline

⚙️ Settings

Add the pipeline to Scrapy and configure Meilisearch via settings:

ITEM_PIPELINES = {
    "scrapy_meili_pipeline.MeiliSearchPipeline": 300,
}

MEILI_URL = "http://127.0.0.1:7700"
MEILI_API_KEY = "masterKey"          # or None
MEILI_INDEX = "articles"             # required
MEILI_PRIMARY_KEY = "id"             # optional

MEILI_INDEX_SETTINGS = {             # optional
    "filterableAttributes": ["author", "categories", "keywords", "rating"],
    "sortableAttributes": ["published_at", "rating"],
    "searchableAttributes": ["title", "summary", "content", "keywords"],
}

MEILI_BATCH_SIZE = 500
MEILI_TASK_TIMEOUT = 180
MEILI_TASK_INTERVAL = 1

This library supports ONLY the modern Meilisearch client and expects TaskInfo objects with a task_uid attribute.
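
As a sanity check of that contract, a minimal round trip with the official meilisearch package looks like this (a standalone illustration, independent of the pipeline; URL, key, and index name mirror the settings above):

import meilisearch

client = meilisearch.Client("http://127.0.0.1:7700", "masterKey")
index = client.index("articles")

# add_documents() returns a TaskInfo whose task_uid identifies the
# asynchronous indexing task inside Meilisearch.
task_info = index.add_documents([{"id": "1", "title": "Hello"}])

# Block until the task settles, then inspect its terminal status.
task = client.wait_for_task(task_info.task_uid)
print(task.status)  # "succeeded" or "failed"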


🚀 Quick example (Scrapy spider)

from hashlib import sha1
from scrapy import Spider

class ArticleSpider(Spider):
    name = "articles"

    custom_settings = {
        "MEILI_INDEX": "news",
        "MEILI_BATCH_SIZE": 200,
        "MEILI_INDEX_SETTINGS": {"filterableAttributes": ["site", "tags"]},
    }

    def parse(self, response):
        yield {
            "id": response.url,
            "title": response.css("h1::text").get(),
            "author": response.css(".author::text").get(),
            "content": response.css("article::text").getall(),
            "rating": 4,
        }

🧪 Example project & Meilisearch (examples/)

This repo ships with a runnable example under examples/ that scrapes the public test site https://webscraper.io/test-sites/e-commerce/allinone and indexes product tiles into Meilisearch.

Start Meilisearch with Docker

cd examples
docker compose -f docker-compose.meilisearch.yml up -d

Meilisearch UI: http://127.0.0.1:7700

Run the example spider (via Just)

From the repository root:

just example

What the example task does:

  • switches to examples/simple_project
  • runs scrapy crawl demo -s LOG_LEVEL=INFO

If you prefer running it manually:

cd examples/simple_project
uv run scrapy crawl demo -s LOG_LEVEL=INFO
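
To confirm the crawl indexed something, query Meilisearch directly. A quick check (the index name "products" is an assumption; look up MEILI_INDEX in examples/simple_project/simple_project/settings.py for the real value, and adjust the API key to match your .env):

import meilisearch

client = meilisearch.Client("http://127.0.0.1:7700", "masterKey")
# "products" is assumed; use the demo project's actual MEILI_INDEX.
result = client.index("products").search("laptop")
print(result["hits"][:3])  # first few matching product documents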

🧱 Project structure

scrapy-meili-pipeline/
├── src/
│   └── scrapy_meili_pipeline/
│       ├── __init__.py
│       └── meili_pipeline.py
├── tests/
│   └── test_pipeline.py
├── examples/
│   ├── README.md
│   ├── .env.example
│   ├── docker-compose.meilisearch.yml
│   └── simple_project/
│       ├── scrapy.cfg
│       └── simple_project/
│           ├── __init__.py
│           ├── settings.py
│           ├── sitecustomize.py
│           └── spiders/
│               └── demo_spider.py
├── Justfile
├── pyproject.toml
├── README.md
├── LICENSE
└── .github/
    └── workflows/
        ├── ci.yml
        └── publish.yml

🛠️ Development

Using uv + just:

just sync            # install all deps (dev included)
just check           # ruff + black --check + mypy + pytest
just test            # run unit tests
just coverage        # terminal coverage
just coverage-html   # HTML coverage at ./htmlcov/index.html
just build           # build wheel + sdist (uv build)
just publish         # publish to PyPI (uv publish)

Manual (without just):

uv sync --all-extras --dev
uv run ruff check .
uv run black --check .
uv run mypy .
uv run pytest
uv run pytest --cov=src --cov-report=html
uv build
uv publish

📜 License

Released under the MIT License.
