
Scrapy spider for TW Rental House


TW Rental House Utility for Scrapy

This package is built for crawling Taiwanese rental house websites using Scrapy. As crawlers differ in goal, scale, and pipeline, this package provides only a minimum feature set, which allows developers to list and decode rental house web pages into structured data without knowing too much about the detailed HTML and API structure of each website. In addition, this package is designed for extensibility, allowing developers to insert customized callbacks, manipulate data, and integrate with an existing crawler structure.

Although this package provides the ability to crawl rental house websites, it is the developer's responsibility to ensure that the crawling mechanism and the usage of the data are appropriate. Please be friendly to the target website, for example by setting DOWNLOAD_DELAY or enabling AutoThrottle to avoid bulk requests.
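
For example, a minimal settings.py snippet using Scrapy's built-in throttling options might look like the following (the values are illustrative, not recommendations):

# Be polite to the target website
DOWNLOAD_DELAY = 1.0            # wait at least 1 second between requests
AUTOTHROTTLE_ENABLED = True     # let Scrapy adapt the delay to server load
AUTOTHROTTLE_START_DELAY = 1.0
AUTOTHROTTLE_MAX_DELAY = 10.0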

Requirements

  1. Python 3.10+
  2. Playwright (for 591 spiders)
  3. PaddleOCR (for 591 spiders)

Installation

poetry add scrapy-tw-rental-house
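
If you are not using Poetry, the package can also be installed from PyPI with pip:

pip install scrapy-tw-rental-house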

Install Playwright

We use Playwright's default browser (Chromium) to render JavaScript content. Please install the Playwright Chromium browser before using this package.

For more information, please refer to the official documentation.

poetry shell
playwright install chromium

591-specific setup

As 591 implements an anti-crawler mechanism, additional setup is required to bypass it. To enable Playwright to get past it, please ensure you can access the browser developer tools while browsing 591, and copy the setting into settings.py:

BROWSER_INIT_SCRIPT = 'console.log("This command enable Playwright")'

Configure OCR cache

As OCR is a time-consuming process, we provide a cache mechanism to store OCR results. When the OCR cache is enabled, the crawler checks whether an image's hash exists in the cache before performing OCR on it; if it does, the cached result is used instead of running OCR again. The OCR cache is enabled by default.
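
The following is a minimal sketch of the hash-based lookup described above; the hash algorithm and on-disk layout are illustrative assumptions, not the package's actual implementation.

import hashlib
import json
from pathlib import Path

CACHE_DIR = Path('ocr_cache')

def cached_ocr(image_bytes, run_ocr):
    # Use the image hash as the cache key (SHA-256 is an assumption)
    key = hashlib.sha256(image_bytes).hexdigest()
    cache_file = CACHE_DIR / f'{key}.json'
    if cache_file.exists():
        # Cache hit: reuse the stored OCR result
        return json.loads(cache_file.read_text())
    # Cache miss: run the expensive OCR step and store the result
    result = run_ocr(image_bytes)
    CACHE_DIR.mkdir(exist_ok=True)
    cache_file.write_text(json.dumps(result))
    return result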

To disable the OCR cache, configure Scrapy's settings.py as follows:

# Disable OCR cache
OCR_CACHE_ENABLED = False

You can also customize the cache directory by setting OCR_CACHE_DIR:

# Customize OCR cache directory
OCR_CACHE_DIR = 'path/to/ocr_cache' # defaults to ocr_cache

Speed up browser page loading

This package supports skipping requests to specific domains and caching JavaScript files:

# Enable JS caching
BROWSER_JS_CACHE_ENABLED = True # defaults to True
BROWSER_JS_CACHE_DIR = 'path/to/cache' # defaults to js_cache

# Skip requests to specific domains
BROWSER_SKIP_DOMAIN = [
    'https://the.unnecessary.domain',
]

Basic Usage

This package currently supports 591. Each rental house website is implemented as a Scrapy Spider class. You can either crawl an entire website using the default settings, which will take a couple of days, or customize the behaviour based on your needs.

The most basic usage is to create a new Spider class that inherits from Rental591Spider:

from scrapy_twrh.spiders.rental591 import Rental591Spider

class MyAwesomeSpider(Rental591Spider):
    name = 'awesome'

And then start crawling with:

scrapy crawl awesome

Please see the example for detailed usage.

Items

All spiders populate two types of Scrapy items: GenericHouseItem and RawHouseItem.

GenericHouseItem contains normalized data fields; spiders for different websites decode their data and fit it into this schema on a best-effort basis.

RawHouseItem contains unnormalized data fields, preserving the original and structured data on a best-effort basis.

Note that both item types are supersets of the actual data: no single item will have every field populated. It is the developer's responsibility to check which fields are provided when receiving an item; see the pipeline sketch after the list below. For example, in Rental591Spider, for a single rental house, Scrapy will receive:

  1. 1x RawHouseItem + 1x GenericHouseItem while listing all houses, which provides only the minimum data fields for GenericHouseItem;
  2. 1x RawHouseItem + 1x GenericHouseItem while retrieving house detail.
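
Here is a minimal pipeline sketch for checking which fields an item actually carries; the import path of the item classes is an assumption, so adjust it to match your installation.

from scrapy_twrh.items import GenericHouseItem, RawHouseItem  # assumed import path

class FieldAwarePipeline:
    def process_item(self, item, spider):
        # A Scrapy item only exposes the fields that were actually set,
        # so inspect item.keys() before relying on any particular field.
        if isinstance(item, GenericHouseItem):
            spider.logger.debug('GenericHouseItem fields: %s', list(item.keys()))
        elif isinstance(item, RawHouseItem):
            spider.logger.debug('RawHouseItem fields: %s', list(item.keys()))
        return item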

Handlers

All spiders in this package provide the following handlers:

  1. start_list, similar to start_requests in Scrapy, controls how the crawler issues search/list requests to find all rental houses.
  2. parse_list, similar to parse in Scrapy, controls how the crawler handles responses from start_list and generates requests for house detail pages.
  3. parse_detail, controls how the crawler parses house detail pages.

All spiders implement their own default handlers, namely default_start_list, default_parse_list, and default_parse_detail, which can be overridden during __init__. Please see the example for how to control spider behavior using handlers.
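
Below is a sketch of overriding the detail handler. The exact keyword arguments accepted by __init__ are an assumption based on the handler names above; check the example for the actual signature.

from scrapy_twrh.spiders.rental591 import Rental591Spider

class MyAwesomeSpider(Rental591Spider):
    name = 'awesome'

    def __init__(self, **kwargs):
        # Assumed: handlers can be passed as keyword arguments to __init__
        super().__init__(parse_detail=self.my_parse_detail, **kwargs)

    def my_parse_detail(self, response):
        # Reuse the default behaviour, then post-process what it yields
        for item_or_request in self.default_parse_detail(response):
            yield item_or_request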
