
Scrapy spider for TW Rental House


TW Rental House Utility for Scrapy

This package is built for crawling Taiwanese rental-house websites using Scrapy. Since crawlers differ in goal, scale, and pipeline, this package provides only a minimal feature set, which allows developers to list and decode rental-house web pages into structured data without knowing too much about the detailed HTML and API structure of each website. In addition, this package is designed for extensibility, allowing developers to insert custom callbacks, manipulate data, and integrate with an existing crawler structure.

Although this package provides the ability to crawl rental-house websites, it is the developer's responsibility to ensure the crawling mechanism and the usage of the data are appropriate. Please be friendly to the target website; for example, consider using DOWNLOAD_DELAY or the AutoThrottle extension to avoid bulk requests.
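For example, a polite configuration in settings.py might look like the following (the values are illustrative, not recommendations from this package):

```python
# settings.py -- throttle requests to stay friendly to target sites
DOWNLOAD_DELAY = 5             # fixed delay (seconds) between requests
AUTOTHROTTLE_ENABLED = True    # let Scrapy adapt the delay to server load
AUTOTHROTTLE_START_DELAY = 5   # initial download delay
AUTOTHROTTLE_MAX_DELAY = 60    # upper bound when the server is slow
```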

Requirement

  1. Python 3.10+
  2. Playwright (for 591 spiders)
  3. PaddleOCR (for 591 spiders)

Installation

poetry add scrapy-tw-rental-house

Install Playwright

We use Playwright's default browser (Chromium) to render JavaScript content. Please install the Playwright Chromium browser before using this package.

For more information, please refer to the official documentation.

poetry shell
playwright install chromium

591 specific

As 591 implements an anti-crawler mechanism, additional setup is required to bypass it. To enable Playwright to bypass the 591 anti-crawler mechanism, please make sure you can access the browser developer tools while browsing 591, and copy the setting into settings.py:

BROWSER_INIT_SCRIPT = 'console.log("This command enable Playwright")'

Configure OCR cache

As OCR is a time-consuming process, we provide a cache mechanism to store OCR results. When the OCR cache is enabled, before performing OCR on an image, the crawler checks whether the image hash exists in the cache; if it does, the cached result is used instead of performing OCR again. The OCR cache is enabled by default.

To disable the OCR cache, configure Scrapy's settings.py as follows:

# Disable OCR cache
OCR_CACHE_ENABLED = False

You can also customize the cache directory by setting OCR_CACHE_DIR:

# Customize OCR cache directory
OCR_CACHE_DIR = 'path/to/ocr_cache' # defaults to ocr_cache

Speed up browser page loading

This package supports skipping requests to specific domains and caching JS.

# Enable JS cache
BROWSER_JS_CACHE_ENABLED = True # defaults to True
BROWSER_JS_CACHE_DIR = 'path/to/cache' # defaults to js_cache

# Skip requests to specific domains
BROWSER_SKIP_DOMAIN = [
    'https://the.unnecessary.domain',
]

Basic Usage

This package currently supports 591. Each rental-house website is a Scrapy Spider class. You can either crawl an entire website using the default settings, which will take a couple of days, or customize the behaviour based on your needs.

The most basic usage is creating a new Spider class that inherits from Rental591Spider:

from scrapy_twrh.spiders.rental591 import Rental591Spider

class MyAwesomeSpider(Rental591Spider):
    name = 'awesome'

Then start crawling with:

scrapy crawl awesome

Please see the example for detailed usage.

Items

All spiders populate two types of Scrapy items: GenericHouseItem and RawHouseItem.

GenericHouseItem contains normalized data fields; spiders for different websites decode their data and fit it into this schema on a best-effort basis.

RawHouseItem contains unnormalized data fields, keeping the original and structured data on a best-effort basis.

Note that both items are supersets of the schema. It is the developer's responsibility to check which fields are provided when receiving an item. For example, in Rental591Spider, for a single rental house, Scrapy will get:

  1. 1x RawHouseItem + 1x GenericHouseItem while listing all houses, which provides only the minimum data fields for GenericHouseItem;
  2. 1x RawHouseItem + 1x GenericHouseItem while retrieving house detail.
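Since any subset of the schema may be populated, downstream code should check fields before using them. Below is a minimal pipeline sketch of that pattern; the field names vendor_house_id and monthly_price are illustrative assumptions, not the package's actual schema. Scrapy items behave like mappings, so keys() lists only the fields a spider actually filled in.

```python
def populated_fields(item):
    """Return the set of fields the spider actually filled in."""
    return set(item.keys())

class HouseValidationPipeline:
    # Fields required before persisting an item; illustrative only
    REQUIRED = {'vendor_house_id', 'monthly_price'}

    def process_item(self, item, spider):
        missing = self.REQUIRED - populated_fields(item)
        if missing:
            # Likely a list-phase item carrying only minimal fields
            spider.logger.debug('partial item, missing %s', missing)
            return item
        # Detail-phase item: the required fields are safe to access here
        return item
```

Register such a pipeline in ITEM_PIPELINES as with any other Scrapy pipeline.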

Handlers

All spiders in this package provide the following handlers:

  1. start_list, similar to start_requests in Scrapy, controls how the crawler issues search/list requests to find all rental houses.
  2. parse_list, similar to parse in Scrapy, controls how the crawler handles responses from start_list and generates requests for detail house-info pages.
  3. parse_detail, controls how the crawler parses the detail page.

All spiders implement their own default handlers, namely default_start_list, default_parse_list, and default_parse_detail, which can be overwritten during __init__. Please see the example for how to control spider behavior using handlers.

