Scrapy spider imports for Meshagent datasets

Project description

Meshagent Scrapy

Spider a website with Scrapy and import page content into a Meshagent room dataset.

from meshagent.scrapy import import_domain_with_scrapy

# Run inside an async function; `room` is an already-connected Meshagent room.
result = await import_domain_with_scrapy(
    room,
    url="https://example.com",   # start URL for the domain crawl
    table="pages",               # destination table in the room dataset
    namespace=["crawls"],
    limit=100,                   # import at most 100 pages
    concurrency=5,               # Scrapy's maximum concurrent requests
)

To test it through the meshagent room connect CLI:

meshagent room connect --room=my-room --identity=scrapy -- \
  python meshagent-sdk/meshagent-scrapy/examples/crawl.py \
  https://www.meshagent.com --table=sample --namespace=crawls --limit=100 --concurrency=5

The sample command writes progress to stderr while it imports. TTY output uses a single updating line; redirected output uses plain log lines. Pass --silent to suppress progress output.

Pass --concurrency or concurrency= to tune Scrapy's maximum concurrent requests.

Pass --batch-size or batch_size= to cap how many page records are merged into the content table at once. The crawler also flushes content batches by estimated payload size with --max-batch-bytes or max_batch_bytes=, which defaults to 16 MiB, and by elapsed time with --max-batch-delay or max_batch_delay=, which defaults to 5 minutes. The row-count cap defaults to 1000. Raw HTML rows can be large, so prefer lowering the byte limit before lowering the row count if the room server reports Lance/DataFusion merge memory exhaustion while importing full HTML pages.
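
For example, when importing full HTML pages it may help to lower the byte cap first. This is a minimal sketch from library code; the integer batch_size and max_batch_bytes values and a max_batch_delay given in seconds are assumptions about the parameter types, not confirmed here:

result = await import_domain_with_scrapy(
    room,
    url="https://example.com",
    table="pages",
    namespace=["crawls"],
    content_format="html",            # keep full HTML (see --format below)
    batch_size=500,                   # merge at most 500 page records per batch (default 1000)
    max_batch_bytes=4 * 1024 * 1024,  # flush at roughly 4 MiB instead of the 16 MiB default
    max_batch_delay=60,               # assumed to be seconds; the default is 5 minutes
)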

The crawler sends a browser-like User-Agent by default. Pass --user-agent or user_agent= to override it for a specific crawl.

The default extractor writes page content as markdown in the text column. Use --format=html to keep HTML, --format=text to strip markup to plain text, or pass content_format= from library code.
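
For example, a crawl that stores plain text and identifies itself with a custom User-Agent might look like the following sketch; the User-Agent string is illustrative:

result = await import_domain_with_scrapy(
    room,
    url="https://example.com",
    table="pages",
    namespace=["crawls"],
    content_format="text",                  # plain text instead of the default markdown
    user_agent="example-docs-crawler/1.0",  # replaces the browser-like default
)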

By default, the crawler runs Trafilatura cleanup before converting markdown/text content, which strips common navigation, footer, sidebar, and ad boilerplate. For --format=html, the default is to strip scripts and inline image data URLs while preserving the rest of the HTML. Use --strip= with comma-separated values like scripts, css, whitespace, image-data-urls, or clean to choose the HTML stripping steps, or --strip=none to process the raw response body.
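
For example, to shrink stored HTML beyond the default (which only removes scripts and inline image data URLs), a crawl could also drop CSS and collapse whitespace:

--strip=scripts,css,whitespace,image-data-urls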

The CLI persists crawl frontier state by default in <table>__frontier, so a limited run can be resumed by running the same command again:

meshagent room connect --room=my-room --identity=scrapy -- \
  python meshagent-sdk/meshagent-scrapy/examples/crawl.py \
  https://www.meshagent.com --table=sample --namespace=crawls --limit=100

Pass --frontier-table to choose a different state table, or --no-resume to run without frontier persistence. Library callers can opt in with resume=True. Frontier updates are buffered before they are written; tune that with --frontier-batch-size or the library frontier_batch_size= argument. Failed URLs are not retried on resume unless you pass --retry-failed or retry_failed=True.
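
From library code, a resumable, limited crawl might look like this sketch; the frontier_batch_size value is illustrative:

result = await import_domain_with_scrapy(
    room,
    url="https://www.meshagent.com",
    table="sample",
    namespace=["crawls"],
    limit=100,
    resume=True,             # persist and reuse frontier state in sample__frontier
    frontier_batch_size=50,  # buffer this many frontier updates per write
    retry_failed=True,       # also retry URLs that failed on a previous run
)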

The crawler creates indexes by default: a BTREE index on the page table primary key, plus BTREE url and BITMAP status indexes on the frontier table. Pass --index=text or index_columns=("text",) to also create an INVERTED index on text. Pass --no-indexes or create_indexes=False to skip all automatic index creation. It also runs dataset optimization periodically while importing and shows optimizing/optimized in progress output. Tune that with --optimize-every or optimize_every=, and use 0 on the CLI or None in library code to disable automatic optimization.
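
For example, a crawl meant for full-text search could add the inverted index on text and turn off periodic optimization. Whether non-None optimize_every values count pages or seconds is not specified here, so this sketch only uses None:

result = await import_domain_with_scrapy(
    room,
    url="https://example.com",
    table="pages",
    namespace=["crawls"],
    index_columns=("text",),  # add an INVERTED index on text alongside the default indexes
    optimize_every=None,      # disable periodic dataset optimization during the import
)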

By default, the crawler imports textual responses only, based on Content-Type values containing text/, html, xml, or json. Pass --response-filter or response_filter= to replace that default with a JMESPath expression over url, status, headers, content_type, and content_type_lower. Header names are lower-cased, so an HTML-only crawl can use:

--response-filter "contains(headers.\"content-type\", 'text/html')"
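
The same kind of filter can be passed from library code; the sketch below uses the documented content_type_lower field so no header-name quoting is needed:

html_only = "contains(content_type_lower, 'text/html')"

result = await import_domain_with_scrapy(
    room,
    url="https://example.com",
    table="pages",
    namespace=["crawls"],
    response_filter=html_only,  # replaces the default textual-response filter
)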

By default, records are merged on url with the columns url, date, content_type, text, and images. text is markdown unless another content format is selected. images is a struct array with src and alt only, and inline image data URLs are excluded.
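
Schematically, a merged row carries values like the following; the values are illustrative and the exact date format is not documented here:

row = {
    "url": "https://example.com/docs/intro",
    "date": "2026-02-12T08:30:00Z",                     # format/semantics not specified here
    "content_type": "text/html; charset=utf-8",
    "text": "# Introduction\n\nWelcome to the docs.",   # markdown unless another format is chosen
    "images": [
        {"src": "https://example.com/logo.png", "alt": "Project logo"},
    ],
}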

Pass an async extract= callback to derive custom columns from the Scrapy response and content bytes. Return None from the callback to skip the record. Pass an async progress= callback to observe import progress from library code.
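
A sketch of both callbacks. The signatures are assumptions: extract is shown taking the Scrapy response and the content bytes as described above, and progress accepts anything because its arguments are not documented here. Whether the mapping returned by extract replaces or extends the default columns is also not specified:

async def extract(response, content):
    # Keep only documentation pages (illustrative rule); returning None skips the record.
    if "/docs/" not in response.url:
        return None
    # Derive custom columns from the response and the content bytes.
    return {"section": "docs", "size_bytes": len(content)}

async def progress(*args, **kwargs):
    # Argument shape is not documented here; just log whatever the importer reports.
    print(args, kwargs)

result = await import_domain_with_scrapy(
    room,
    url="https://example.com",
    table="pages",
    namespace=["crawls"],
    extract=extract,
    progress=progress,
)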

Download files

Source Distribution

meshagent_scrapy-0.39.5.tar.gz (28.1 kB)

Built Distribution

meshagent_scrapy-0.39.5-py3-none-any.whl (20.1 kB)

File details

Details for the file meshagent_scrapy-0.39.5.tar.gz.

File metadata

  • Download URL: meshagent_scrapy-0.39.5.tar.gz
  • Size: 28.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for meshagent_scrapy-0.39.5.tar.gz

SHA256: 47c96409daf1f3d3e780f5cd95820cb8a395fe43df3db93f366d575fc4aa1cf4
MD5: d33593c8dc00ecb96025ef2074c468f8
BLAKE2b-256: 960f160f09f0681e0b9014d48167a7acd842cad5aef750535bd8e971ef45a6c6

File details

Details for the file meshagent_scrapy-0.39.5-py3-none-any.whl.

File hashes

Hashes for meshagent_scrapy-0.39.5-py3-none-any.whl

SHA256: 73ad1da3c349b7aba5ad6055b1789bc2acc7364bc2741f62d319eebf5c27e9f5
MD5: 0979e213b92ec537bbb7b7083fda41d3
BLAKE2b-256: 885c041180c9a78a3fbd444d0a0bd060d1808a100a31e6deff36d8619203fd21
