# 🕷️ Scrapy → Meilisearch Pipeline
A Scrapy pipeline that batches items and indexes them into Meilisearch, with optional index creation and index settings using the modern Meilisearch Python client.
## ✨ Features

- ✅ Uses the official modern Meilisearch client (TaskInfo / Pydantic models)
- 🧰 Optional index creation (with `primaryKey`) and index settings update
- 📦 Batching of items before insertion
- Task tracking with status check (failed tasks are logged and stored)
- 🧪 Example Scrapy project + Docker Compose for Meilisearch
- 🧹 Tooling: `uv`, `pytest`, `ruff`, `black`, `mypy`, `just` tasks
## 🧠 How batching works (pipeline logic)

The pipeline keeps two internal buffers:

- `_buffer` — a list of items waiting to be sent to Meilisearch
- `_tasks` — a list of Meilisearch `TaskInfo` objects created by `add_documents()` and `update_settings()`

Flow:

1. `process_item` converts an item to `dict` and pushes it into `_buffer`.
2. When `_buffer` reaches `MEILI_BATCH_SIZE`, the pipeline performs a flush:
   - sends the whole `_buffer` with `index.add_documents(batch)`
   - appends the returned `TaskInfo` to `_tasks`
   - calls `_check_all_tasks()`, which waits on all tasks in `_tasks` via `wait_for_task()`:
     - if a task ends with `status="failed"`, it is moved to `_failed_tasks`
     - otherwise it is discarded (success); `_tasks` is cleared
3. `close_spider`:
   - if `_buffer` still has items, a final flush is executed (and tasks checked)
   - if `_tasks` still contains tasks (e.g., from a settings update only), they are checked
   - if any failed tasks were detected, they are logged (no exception is raised, by design)

Benefits of this approach:

- Minimal memory use (bounded by `MEILI_BATCH_SIZE`)
- Early surfacing of Meilisearch task failures during the crawl
- Predictable and simple control flow
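The flow above can be sketched as a small self-contained pipeline. This is an illustrative sketch only, not the library's actual source: `FakeIndex` and `FakeTaskInfo` stand in for the Meilisearch client objects, and method names simply mirror the description above.

```python
class FakeTaskInfo:
    """Stand-in for the TaskInfo returned by add_documents()."""
    def __init__(self, task_uid, status="succeeded"):
        self.task_uid = task_uid
        self.status = status


class FakeIndex:
    """Stand-in for a Meilisearch index; records the batches it receives."""
    def __init__(self):
        self.batches = []

    def add_documents(self, docs):
        self.batches.append(list(docs))
        return FakeTaskInfo(task_uid=len(self.batches))


class BatchingPipeline:
    """Simplified sketch of the buffer/flush logic described above."""
    def __init__(self, index, batch_size):
        self.index = index
        self.batch_size = batch_size
        self._buffer = []
        self._tasks = []
        self._failed_tasks = []

    def process_item(self, item):
        self._buffer.append(dict(item))
        if len(self._buffer) >= self.batch_size:
            self._flush()
        return item

    def _flush(self):
        # Send the whole buffer as one batch, then track the resulting task.
        task = self.index.add_documents(self._buffer)
        self._buffer = []
        self._tasks.append(task)
        self._check_all_tasks()

    def _check_all_tasks(self):
        # The real pipeline would wait via client.wait_for_task(task.task_uid);
        # here the fake task already carries its final status.
        for task in self._tasks:
            if task.status == "failed":
                self._failed_tasks.append(task)
        self._tasks = []

    def close_spider(self):
        if self._buffer:
            self._flush()          # final partial batch
        if self._tasks:
            self._check_all_tasks()
```

With `batch_size=2`, five items produce two full batches during the crawl and one final partial batch on `close_spider`.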
## 📦 Installation

From PyPI:

```bash
pip install scrapy-meili-pipeline
```

Using uv:

```bash
uv add scrapy-meili-pipeline
```
## ⚙️ Settings

Add the pipeline to Scrapy and configure Meilisearch via settings:

```python
ITEM_PIPELINES = {
    "scrapy_meili_pipeline.MeiliSearchPipeline": 300,
}

MEILI_URL = "http://127.0.0.1:7700"
MEILI_API_KEY = "masterKey"  # or None
MEILI_INDEX = "articles"     # required
MEILI_PRIMARY_KEY = "id"     # optional
MEILI_INDEX_SETTINGS = {     # optional
    "filterableAttributes": ["author", "categories", "keywords", "rating"],
    "sortableAttributes": ["published_at", "rating"],
    "searchableAttributes": ["title", "summary", "content", "keywords"],
}
MEILI_BATCH_SIZE = 500
MEILI_TASK_TIMEOUT = 180
MEILI_TASK_INTERVAL = 1
```

This library supports **only** the modern Meilisearch client and expects `TaskInfo` objects with a `task_uid` attribute.
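Assuming the pipeline follows Scrapy's usual pattern of reading configuration from `crawler.settings`, the defaults above would be applied roughly like this. The class and attribute names here are hypothetical, for illustration only; `settings` can be any mapping with a `.get()` method.

```python
class MeiliSettingsReader:
    """Illustrative sketch of reading the MEILI_* settings with the
    documented defaults. Not the library's actual internals."""

    def __init__(self, settings):
        self.url = settings.get("MEILI_URL", "http://127.0.0.1:7700")
        self.api_key = settings.get("MEILI_API_KEY")          # may be None
        self.index = settings.get("MEILI_INDEX")              # required
        if not self.index:
            raise ValueError("MEILI_INDEX is required")
        self.primary_key = settings.get("MEILI_PRIMARY_KEY")  # optional
        self.index_settings = settings.get("MEILI_INDEX_SETTINGS") or {}
        self.batch_size = int(settings.get("MEILI_BATCH_SIZE", 500))
        self.task_timeout = int(settings.get("MEILI_TASK_TIMEOUT", 180))
        self.task_interval = int(settings.get("MEILI_TASK_INTERVAL", 1))
```

Omitting `MEILI_INDEX` fails fast, while the batching and task-polling knobs fall back to the defaults shown in the settings block above.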
## 🚀 Quick example (Scrapy spider)

```python
import hashlib

from scrapy import Spider


class ArticleSpider(Spider):
    name = "articles"
    custom_settings = {
        "MEILI_INDEX": "news",
        "MEILI_BATCH_SIZE": 200,
        "MEILI_INDEX_SETTINGS": {"filterableAttributes": ["site", "tags"]},
    }

    def parse(self, response):
        yield {
            # Meilisearch document ids may only contain a-z, A-Z, 0-9,
            # "-" and "_", so derive one from the URL rather than using
            # the URL verbatim.
            "id": hashlib.sha1(response.url.encode()).hexdigest(),
            "title": response.css("h1::text").get(),
            "author": response.css(".author::text").get(),
            "content": response.css("article::text").getall(),
            "rating": 4,
        }
```
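Note that Meilisearch document ids may only contain alphanumeric characters, hyphens, and underscores, so values like raw URLs must be sanitized before being used as the primary key. One way to do this is a small helper (illustrative, not part of this library) that slugifies the input and appends a short hash so distinct inputs stay distinct after sanitization:

```python
import hashlib
import re


def meili_doc_id(raw: str, max_len: int = 100) -> str:
    """Derive a Meilisearch-safe document id from an arbitrary string."""
    # Collapse every run of disallowed characters into a single hyphen.
    slug = re.sub(r"[^a-zA-Z0-9_-]+", "-", raw).strip("-")
    # Short digest keeps ids unique even when two inputs slugify the same.
    digest = hashlib.sha1(raw.encode("utf-8")).hexdigest()[:12]
    return f"{slug[: max_len - 13]}-{digest}"
```

For example, `meili_doc_id("https://example.com/a?x=1")` yields a readable, length-bounded id that matches Meilisearch's allowed character set.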
## 🧪 Example project & Meilisearch (examples/)

This repo ships with a runnable example under `examples/` that scrapes the public test site
https://webscraper.io/test-sites/e-commerce/allinone and indexes product tiles into Meilisearch.

### Start Meilisearch with Docker

```bash
cd examples
docker compose up -d
```

Meilisearch UI: http://127.0.0.1:7700

### Run the example spider (via Just)

From the repository root:

```bash
just example
```

What the example task does:

- switches to `examples/simple_project`
- runs `scrapy crawl demo -s LOG_LEVEL=INFO`

If you prefer running it manually:

```bash
cd examples/simple_project
uv run scrapy crawl demo -s LOG_LEVEL=INFO
```
## 🧱 Project structure

```
scrapy-meili-pipeline/
├── src/
│   └── scrapy_meili_pipeline/
│       ├── __init__.py
│       └── meili_pipeline.py
├── tests/
│   └── test_pipeline.py
├── examples/
│   ├── README.md
│   ├── .env.example
│   ├── docker-compose.meilisearch.yml
│   └── simple_project/
│       ├── scrapy.cfg
│       └── simple_project/
│           ├── __init__.py
│           ├── settings.py
│           ├── sitecustomize.py
│           └── spiders/
│               └── demo_spider.py
├── Justfile
├── pyproject.toml
├── README.md
├── LICENSE
└── .github/
    └── workflows/
        ├── ci.yml
        └── publish.yml
```
## 🛠️ Development

Using uv + just:

```bash
just sync           # install all deps (dev included)
just check          # ruff + black --check + mypy + pytest
just test           # run unit tests
just coverage       # terminal coverage
just coverage-html  # HTML coverage at ./htmlcov/index.html
just build          # build wheel + sdist (uv build)
just publish        # publish to PyPI (uv publish)
```

Manual (without just):

```bash
uv sync --all-extras --dev
uv run ruff check .
uv run black --check .
uv run mypy .
uv run pytest
uv run pytest --cov=src --cov-report=html
uv build
uv publish
```
## 📄 License
Released under the MIT License.
## File details

Details for the file `scrapy_meili_pipeline-0.1.1.tar.gz`.

### File metadata

- Download URL: scrapy_meili_pipeline-0.1.1.tar.gz
- Upload date:
- Size: 11.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: uv/0.9.5

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `13a49f98a6e297f628517759c5bf978658fc2838afe8cb11b946b4bdb63a3d8d` |
| MD5 | `39c6358b54375bd98dee878d11fecf0e` |
| BLAKE2b-256 | `a7733a7e3a0a690d94171ee134f26e564bf872c46eab285dce00c2afa58b9f8a` |
## File details

Details for the file `scrapy_meili_pipeline-0.1.1-py3-none-any.whl`.

### File metadata

- Download URL: scrapy_meili_pipeline-0.1.1-py3-none-any.whl
- Upload date:
- Size: 8.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: uv/0.9.5

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `aaa2643befd75a1430f28a5259a1ab635391dda71950b5c2ebc31e1a78a16236` |
| MD5 | `7e0f66ccebfa9a0a1d37c5a30a0626a1` |
| BLAKE2b-256 | `066d669d20d79f428dfc23364a88efa98deba2a4db5870502badbf253b43e004` |