scrapy-util
Scrapy Spider
Enabling stats collection
This feature is designed to be used together with spider-admin-pro. Add the following to your Scrapy settings:
```python
# The project name (defaults to this; no need to set it explicitly)
BOT_NAME = 'scrapy_demo'

# URL that run stats are collected to; data is submitted as JSON via POST
STATS_COLLECTION_URL = "http://127.0.0.1:5001/api/collection"
```
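To illustrate the receiving side, here is a minimal sketch of an endpoint that accepts the JSON stats POSTed to `STATS_COLLECTION_URL`. This is an illustrative stand-in built on the standard library, not spider-admin-pro's actual implementation; the echoed response shape is an assumption.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class StatsCollectionHandler(BaseHTTPRequestHandler):
    """Accepts POSTed JSON stats (illustrative sketch, not spider-admin-pro)."""

    def do_POST(self):
        # Read the JSON body that the spider POSTs.
        length = int(self.headers.get("Content-Length", 0))
        stats = json.loads(self.rfile.read(length) or b"{}")
        # A real collector would persist the stats; here we just acknowledge
        # with the number of stat keys received (hypothetical response shape).
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps({"received": len(stats)}).encode())


# To run standalone on the port used in the settings example above:
# HTTPServer(("127.0.0.1", 5001), StatsCollectionHandler).serve_forever()
```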
Using ScriptSpider
```python
# -*- coding: utf-8 -*-
from scrapy import cmdline

from scrapy_util.spiders import ScriptSpider


class BaiduScriptSpider(ScriptSpider):
    name = 'baidu_script'

    def execute(self):
        print("hi")


if __name__ == '__main__':
    cmdline.execute('scrapy crawl baidu_script'.split())
```
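The idea behind a script spider is to run a plain Python function inside the Scrapy process, with no crawling: subclasses override `execute()` instead of `parse()`. A pure-Python sketch of that pattern (an assumption for illustration, not scrapy_util's actual base class) looks like this:

```python
class ScriptSpiderSketch:
    """Sketch of a script-spider base class: subclasses override execute(),
    and the framework calls it once instead of scheduling HTTP requests.
    (Assumption for illustration; not scrapy_util's real implementation.)"""

    name = None

    def run(self):
        # The real base class would hook into Scrapy's engine; here we
        # simply invoke the user-defined body once.
        self.execute()

    def execute(self):
        raise NotImplementedError


class DemoScriptSpider(ScriptSpiderSketch):
    name = "demo_script"

    def execute(self):
        print("hi")


DemoScriptSpider().run()  # prints "hi"
```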
Source Distribution

scrapy-util-0.0.2.tar.gz (4.2 kB)

Built Distribution

scrapy_util-0.0.2-py3-none-any.whl
Hashes for scrapy_util-0.0.2-py3-none-any.whl

| Algorithm | Hash digest |
|---|---|
| SHA256 | 1485c5288084eb1ca2b73f736f7d6ed42c69d0d80b4cdd0f401c8ca5aeebce5d |
| MD5 | 8fa31f81812c49f0e28c2bd998c9b340 |
| BLAKE2b-256 | 1bdd806f4402ec9ed5098fac26babc3d6d09832228a86d88b9e7dac6166be711 |