
Project description

lrabbit_scrapy

lrabbit_scrapy is a small spider framework that is easy to run. When you frequently need to crawl a single site, you don't have to rewrite the same boilerplate every time; with this small framework you can quickly crawl data into a file or a database.

Installing

$ pip install lrabbit_scrapy

Quick start

  • create blog_spider.py
from lrabbit_scrapy.spider import LrabbitSpider
from lrabbit_scrapy.common_utils.network_helper import RequestSession
from lrabbit_scrapy.common_utils.print_log_helper import LogUtils
from lrabbit_scrapy.common_utils.all_in_one import FileStore
import os
from lrabbit_scrapy.common_utils.mysql_helper import MysqlClient
from parsel import Selector


class Spider(LrabbitSpider):
    """
        spider_name : lrabbit blog spider
    """
    # unique spider name
    spider_name = "lrabbit_blog"
    # maximum number of worker threads
    max_thread_num = 2
    # open a MySQL connection for every thread; if max_thread_num exceeds 10 and your code runs MySQL queries, enable this option
    thread_mysql_open = True
    # reset the whole task list; when enabled, every program restart re-initializes the task list
    reset_task_config = False
    # loop init_task_list; when all tasks are finished and you want to run them again, enable this
    loop_task_config = False
    # confirmation option; if enabled, you are asked to confirm when the task list is initialized
    remove_confirm_config = False
    # config_env_name is the environment variable that holds the config file path; on Linux: export config_path="crawl.ini"
    config_env_name = "config_path"
    # redis db number
    redis_db_config = 0
    # debug log; enables traceback logging
    debug_config = False

    def __init__(self):
        super().__init__()
        self.session = RequestSession()
        self.proxy_session = RequestSession(proxies=None)
        csv_path = os.path.join(os.path.abspath(os.getcwd()), f"{self.spider_name}.csv")
        self.field_names = ['id', 'title', 'datetime']
        self.blog_file = FileStore(file_path=csv_path, filed_name=self.field_names)

    def worker(self, *args):
        task = args[0]
        mysql_client: MysqlClient
        if len(args) == 2:
            mysql_client = args[1]
            # mysql_client.execute("")
        res = self.session.send_request(method='GET', url=f'http://www.lrabbit.life/post_detail/?id={task}')
        selector = Selector(res.text)
        title = selector.css(".detail-title h1::text").get()
        datetime = selector.css(".detail-info span::text").get()
        if title:
            post_data = {"id": task, "title": title, 'datetime': datetime}
            self.blog_file.write(post_data)
            # update the redis stats once the content is successfully fetched
            self.update_stat_redis()
        LogUtils.log_finish(task)

    def init_task_list(self):
        # you can also build the initial task list from MySQL:
        # res = self.mysql_client.query("select id from rookie limit 100 ")
        # return [task['id'] for task in res]
        return list(range(100))


if __name__ == '__main__':
    spider = Spider()
    spider.run()
  • set config.ini and the config environment variable

    • create crawl.ini; in this example the file path is /root/crawl.ini
    [server]
    mysql_user = root
    mysql_password = 123456
    mysql_database = test
    mysql_host = 192.168.1.1
    redis_user = lrabbit
    redis_host = 192.168.1.1
    redis_port = 6379
    redis_password = 123456
    
    [test]
    mysql_user = root
    mysql_password = 123456
    mysql_database = test
    mysql_host = 192.168.1.1
    redis_user = lrabbit
    redis_host = 192.168.1.1
    redis_port = 6379
    redis_password = 123456
    
    • set the config env variable
      • Windows PowerShell
      • $env:config_path = "/root/crawl.ini"
      • Linux
      • export config_path="/root/crawl.ini"
  • python3 blog_spider.py

  • python3 blog_spider.py stat

    • show task statistics
  • python3 -m lrabbit-scrapy sslpass

    • bypass Android SSL
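The crawl.ini above is a standard INI file, and the config_env_name setting suggests the framework locates it through the config_path environment variable. Here is a minimal sketch of that lookup using Python's configparser; the load_crawl_config function is illustrative, not the library's actual API, and the section name "server" comes from the example file above:

```python
import configparser
import os


def load_crawl_config(env_name: str = "config_path", section: str = "server") -> dict:
    """Read the INI file named by the given environment variable
    and return one section as a plain dict."""
    config_file = os.environ[env_name]  # e.g. /root/crawl.ini
    parser = configparser.ConfigParser()
    parser.read(config_file)
    return dict(parser[section])  # e.g. {'mysql_user': 'root', ...}
```

Note that configparser returns every value as a string, so numeric settings such as redis_port must be converted explicitly (e.g. int(cfg["redis_port"])).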

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

lrabbit_scrapy-2.0.6.tar.gz (19.0 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

lrabbit_scrapy-2.0.6-py3-none-any.whl (22.7 kB)

Uploaded Python 3

File details

Details for the file lrabbit_scrapy-2.0.6.tar.gz.

File metadata

  • Download URL: lrabbit_scrapy-2.0.6.tar.gz
  • Upload date:
  • Size: 19.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.7.0 importlib_metadata/4.8.2 pkginfo/1.8.2 requests/2.26.0 requests-toolbelt/0.9.1 tqdm/4.62.3 CPython/3.6.8

File hashes

Hashes for lrabbit_scrapy-2.0.6.tar.gz
Algorithm Hash digest
SHA256 1d26490db72ae49d8bee5002e3530edab9f15720a019ea55ddb5d6eaf0f37bb9
MD5 7671dcfe30d2395b1d5cdb679e839057
BLAKE2b-256 f5bfb988b1f8d0dc8cb66a1af891aacaf8a18b6ea9caaaa384f1aaf5f3b33d9c

See more details on using hashes here.
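The hashes above let you verify a downloaded archive before installing it. A short, generic sketch using Python's hashlib (the file path is a placeholder for wherever you saved the download):

```python
import hashlib


def sha256_of(path: str) -> str:
    """Return the hex SHA256 digest of a file, read in chunks
    so large archives do not need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


# Compare the result against the published SHA256 digest, e.g.:
# sha256_of("lrabbit_scrapy-2.0.6.tar.gz") == "1d26490db72ae49d..."
```

If the computed digest does not match the one listed on this page, the download is corrupt or has been tampered with and should not be installed.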

File details

Details for the file lrabbit_scrapy-2.0.6-py3-none-any.whl.

File metadata

  • Download URL: lrabbit_scrapy-2.0.6-py3-none-any.whl
  • Upload date:
  • Size: 22.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.7.0 importlib_metadata/4.8.2 pkginfo/1.8.2 requests/2.26.0 requests-toolbelt/0.9.1 tqdm/4.62.3 CPython/3.6.8

File hashes

Hashes for lrabbit_scrapy-2.0.6-py3-none-any.whl
Algorithm Hash digest
SHA256 93e31005f9c5d2741301a1b225aa835e453b700be66d1f418609c88358e4e03e
MD5 71d7ea32bb32d76b6d6addeb14d63ff2
BLAKE2b-256 cad180970cc7baa62a98109ba839718c7348f380f83686127cb7af51a08851e5

See more details on using hashes here.
