Scrapy spider for TW Rental House
TW Rental House Utility for Scrapy
This package is built for crawling Taiwanese rental-house websites using Scrapy. As crawlers differ in goal, scale, and pipeline, this package provides only a minimum feature set, which allows developers to list and decode rental house web pages into structured data without knowing too much about the detailed HTML and API structure of each website. In addition, this package is designed for extensibility, allowing developers to insert customized callbacks, manipulate data, and integrate with an existing crawler structure.
Although this package provides the ability to crawl rental house websites, it is the developer's responsibility to ensure the crawling mechanism and the usage of data are appropriate. Please be friendly to the target website, for example by using DOWNLOAD_DELAY or AutoThrottle to prevent bulk requesting.
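As an illustration, a minimal politeness configuration in Scrapy's settings.py might look like the sketch below; the values are illustrative and should be tuned to the target website:

```python
# settings.py -- illustrative politeness settings; tune values per target site
DOWNLOAD_DELAY = 10           # seconds between requests to the same website
AUTOTHROTTLE_ENABLED = True   # let Scrapy adapt the delay to server responsiveness
AUTOTHROTTLE_START_DELAY = 5  # initial download delay used by AutoThrottle
```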
Requirement
- Python 3.10+
- Playwright (for 591 spiders)
- PaddleOCR (for 591 spiders)
Installation
```
poetry add scrapy-tw-rental-house
```
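If you are not using Poetry, installing from PyPI with pip should work as well:

```
pip install scrapy-tw-rental-house
```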
Install Playwright
We use Playwright's default browser (Chromium) to render JavaScript content. Please install the Playwright Chromium browser before using this package.
For more information, please refer to the official documentation.
```
poetry shell
playwright install chromium
```
591 specific
As 591 implements an anti-crawler mechanism, additional setup is required to bypass it. To enable Playwright to bypass the 591 anti-crawler mechanism, please make sure you can access 591 with the browser developer tools open, and copy the setting into settings.py:
```python
BROWSER_INIT_SCRIPT = 'console.log("This command enable Playwright")'
```
Enable OCR cache
As OCR is a time-consuming process, we provide a cache mechanism to store OCR results. To enable the OCR cache, please configure Scrapy's settings.py as follows:
```python
# Enable OCR cache
OCR_CACHE_ENABLED = True         # default False
OCR_CACHE_DIR = 'path/to/cache'  # default to ocr_cache
```
Speed up browser page loading
This package supports skipping requests to specific domains and caching JavaScript files. To enable these features, please configure Scrapy's settings.py as follows:
```python
# Enable cache for JS
BROWSER_JS_CACHE_ENABLED = True
BROWSER_JS_CACHE_DIR = 'path/to/cache'  # default to js_cache

# Enable skipping requests to specific domains
BROWSER_SKIP_DOMAIN = [
    'https://the.unnecessary.domain',
]
```
Basic Usage
This package currently supports 591. Each rental house website is a Scrapy Spider class. You can either crawl an entire website using the default settings, which will take a couple of days, or customize the behavior based on your needs.
The most basic usage is to create a new Spider class that inherits Rental591Spider:
```python
from scrapy_twrh.spiders.rental591 import Rental591Spider

class MyAwesomeSpider(Rental591Spider):
    name = 'awesome'
```
And then start crawling:
```
scrapy crawl awesome
```
Please see the example for detailed usage.
Items
All spiders populate two types of Scrapy items: GenericHouseItem and RawHouseItem.
GenericHouseItem contains normalized data fields; spiders for different websites decode their data and fit it into this schema on a best-effort basis.
RawHouseItem contains unnormalized data fields, which keep the original and structured data on a best-effort basis.
Note that both item classes are supersets of the data actually emitted. It is the developer's responsibility to check which fields are provided when receiving an item (see the pipeline sketch below).
For example, in Rental591Spider, for a single rental house, Scrapy will get:
- 1x RawHouseItem + 1x GenericHouseItem during listing all houses, which provides only minimum data fields for GenericHouseItem
- 1x RawHouseItem + 1x GenericHouseItem during retrieving house detail.
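As an illustration, an item pipeline can dispatch on the item type and guard against missing fields. This is only a sketch: the import path scrapy_twrh.items and the field name vendor_house_id are assumptions, not guaranteed API; adjust them to the actual schema.

```python
# Sketch of a pipeline that handles both item types; the import path
# and the `vendor_house_id` field name are assumptions.
from scrapy_twrh.items import GenericHouseItem, RawHouseItem

class HousePipeline:
    def process_item(self, item, spider):
        if isinstance(item, GenericHouseItem):
            # Items are supersets of the emitted data, so check field
            # presence before reading it.
            house_id = item.get('vendor_house_id')
            if house_id is not None:
                spider.logger.info('Got house %s', house_id)
        elif isinstance(item, RawHouseItem):
            # Raw payload; keep it around for later reprocessing.
            pass
        return item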
Handlers
All spiders in this package provide the following handlers:
- start_list, similar to start_requests in Scrapy, controls how the crawler issues search/list requests to find all rental houses.
- parse_list, similar to parse in Scrapy, controls how the crawler handles responses from start_list and generates requests for house detail pages.
- parse_detail, controls how the crawler parses detail pages.
All spiders implement their own default handlers, namely default_start_list, default_parse_list, and default_parse_detail, which can be overwritten during __init__. Please see the example for how to control spider behavior using handlers; a minimal sketch follows.
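For instance, assuming the handlers are accepted as keyword arguments to __init__ (as the description above suggests), a custom parse_detail could wrap the default one like this; the keyword argument name and the signature of default_parse_detail are assumptions:

```python
# Sketch of overriding a handler via __init__; the `parse_detail` keyword
# and the default_parse_detail signature are assumptions based on the docs.
from scrapy_twrh.spiders.rental591 import Rental591Spider

class MyAwesomeSpider(Rental591Spider):
    name = 'awesome'

    def __init__(self, **kwargs):
        super().__init__(parse_detail=self.my_parse_detail, **kwargs)

    def my_parse_detail(self, response):
        # Reuse the default handler and post-process whatever it yields
        for item in self.default_parse_detail(response):
            yield item
```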
File details
Details for the file scrapy_tw_rental_house-2.1.3.tar.gz.
File metadata
- Download URL: scrapy_tw_rental_house-2.1.3.tar.gz
- Upload date:
- Size: 28.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.7.1 CPython/3.10.12 Linux/6.8.0-57-generic
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 0aff83466a368976e51fc7bbe6820ae31db591a029c0800b7b0192a0f5fc2da8 |
| MD5 | ac6959fc05dcd14628cb79f27541a501 |
| BLAKE2b-256 | 8e80b8b18e83f0360f5536c35478d4195d475cb8cd3ec102e61ab3c361200c14 |
File details
Details for the file scrapy_tw_rental_house-2.1.3-py3-none-any.whl.
File metadata
- Download URL: scrapy_tw_rental_house-2.1.3-py3-none-any.whl
- Upload date:
- Size: 30.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.7.1 CPython/3.10.12 Linux/6.8.0-57-generic
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | a5c7540c05b880428288002f570e2add3eb0edb45112a2687509ea2808c78c1b |
| MD5 | 14f5e0c1f9fead5f820a3decb135f158 |
| BLAKE2b-256 | 5cfd8770751173590f4c8b1785d4478cbad834ff4a3d1b162d32607da72f368f |